Exam Topics Extracted Questions

Question 10


Exam Professional Cloud Security Engineer topic 1 question 10 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 10
Topic #: 1

Your team needs to obtain a unified log view of all development cloud projects in your SIEM. The development projects are under the NONPROD organization folder with the test and pre-production projects. The development projects share the ABC-BILLING billing account with the rest of the organization.
Which logging export strategy should you use to meet the requirements?

  • A. 1. Export logs to a Cloud Pub/Sub topic with folders/NONPROD parent and includeChildren property set to True in a dedicated SIEM project. 2. Subscribe SIEM to the topic.
  • B. 1. Create a Cloud Storage sink with billingAccounts/ABC-BILLING parent and includeChildren property set to False in a dedicated SIEM project. 2. Process Cloud Storage objects in SIEM.
  • C. 1. Export logs in each dev project to a Cloud Pub/Sub topic in a dedicated SIEM project. 2. Subscribe SIEM to the topic.
  • D. 1. Create a Cloud Storage sink with a publicly shared Cloud Storage bucket in each project. 2. Process Cloud Storage objects in SIEM.
Suggested Answer: A 🗳️

Comments

xhova
Highly Voted 5 years ago
Answer is A. https://cloud.google.com/logging/docs/export/aggregated_sinks
upvoted 34 times
Ishu_awsguy
2 years, 4 months ago
With this you would also be getting logs for pre-prod and the other environments under the folder, hence A is eliminated. Answer should be C.
upvoted 9 times
civilizador
1 year, 8 months ago
But that is exactly what the requirements in the question say: ALL development projects. Today we have 2; tomorrow we are going to have 10. Clearly the answer is A.
upvoted 1 times
...
...
ppandher
1 year, 5 months ago
Per the link above, setting the includeChildren parameter to True will route logs from the folder plus its contained billing accounts and projects -- I think that's not a unified view of just the dev logs?
upvoted 1 times
...
...
TNT87
Highly Voted 4 years, 1 month ago
To use the aggregated sink feature, create a sink in a Google Cloud organization or folder and set the sink's includeChildren parameter to True. That sink can then export log entries from the organization or folder, plus (recursively) from any contained folders, billing accounts, or projects. You can use the sink's filter to specify log entries from projects, resource types, or named logs. https://cloud.google.com/logging/docs/export/aggregated_sinks so the Ans is A
upvoted 9 times
...
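The aggregated-sink setup described in the comments above can be sketched with the gcloud CLI. The folder ID, SIEM project, topic name, and log filter below are placeholders, not values given in the question:

```shell
# Create an aggregated sink on the NONPROD folder that exports matching
# log entries (recursively, via --include-children) to a Pub/Sub topic
# in a dedicated SIEM project.
gcloud logging sinks create nonprod-dev-to-siem \
  pubsub.googleapis.com/projects/siem-project/topics/dev-logs \
  --folder=123456789012 \
  --include-children \
  --log-filter='resource.labels.project_id=~"^dev-"'
```

The command prints the sink's writer identity, which must still be granted roles/pubsub.publisher on the topic before entries start flowing.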
BPzen
Most Recent 4 months, 4 weeks ago
Selected Answer: A
By setting the parent resource to folders/NONPROD and includeChildren to True, you specifically capture logs from all projects within the NONPROD folder (test and pre-production). This avoids collecting logs from other parts of the organization.
upvoted 1 times
...
Mr_MIXER007
7 months, 2 weeks ago
Selected Answer: A
Answer is A.
upvoted 3 times
...
3d9563b
8 months, 2 weeks ago
Selected Answer: A
Centralized Export: By exporting logs at the folder level with includeChildren set to True, you centralize the logging export process. This setup ensures that all logs from the relevant projects under the NONPROD folder are captured without needing individual setups for each project. Real-Time Processing: Using a Cloud Pub/Sub topic allows for real-time log export to your SIEM, which is beneficial for timely log analysis and monitoring.
upvoted 1 times
...
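On the consuming side, the SIEM's subscriber receives each exported LogEntry as base64-encoded JSON inside the Pub/Sub message body. A minimal sketch; the field shape follows Pub/Sub's push-message format, and the log entry itself is a made-up example:

```python
import base64
import json

def parse_log_entry(push_message: dict) -> dict:
    """Decode the LogEntry JSON carried in a Pub/Sub push message body."""
    data = push_message["message"]["data"]  # base64-encoded bytes
    return json.loads(base64.b64decode(data))

# Simulated push body wrapping a minimal, hypothetical LogEntry.
entry = {"logName": "projects/dev-project-1/logs/syslog", "severity": "INFO"}
push = {"message": {"data": base64.b64encode(json.dumps(entry).encode()).decode()}}

parsed = parse_log_entry(push)
print(parsed["logName"])  # projects/dev-project-1/logs/syslog
```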
Sayl007_
1 year ago
It can't be C because exporting logs from each development project individually is more complex to manage and requires subscribing your SIEM to multiple topics.
upvoted 1 times
...
dija123
1 year ago
Selected Answer: A
Answer is A
upvoted 2 times
...
nccdebug
1 year, 1 month ago
Option C suggests exporting logs to individual Cloud Pub/Sub topics for each dev project, which may not provide a unified view of all development projects' logs.
upvoted 1 times
...
ppandher
1 year, 6 months ago
As per my understanding, the NONPROD folder has three kinds of projects: test, pre-production, and dev. The question asks for unified logs from dev only; setting includeChildren on the FOLDER will extract logs from the other two sets of projects, which we do not want. So exporting logs from the dev projects is the only solution here - correct me if I am wrong?
upvoted 4 times
...
Xoxoo
1 year, 6 months ago
Selected Answer: A
Option A is the recommended logging export strategy to meet the requirements: A. Export logs to a Cloud Pub/Sub topic with folders/NONPROD parent and includeChildren property set to True in a dedicated SIEM project. Subscribe SIEM to the topic. Here's why this option is suitable: It exports logs from all development cloud projects under the NONPROD organization folder, ensuring a unified view. The use of the "includeChildren" property set to True allows you to capture logs from all child projects within the folder hierarchy. Exporting logs to a Cloud Pub/Sub topic provides a scalable and real-time way to stream logs to an external system like your SIEM. Subscribing the SIEM to the Pub/Sub topic enables it to consume and process the logs effectively.
upvoted 2 times
Xoxoo
1 year, 6 months ago
Option B may work but is less efficient because it exports logs separately from each project and relies on Cloud Storage, which may not be as real-time as Pub/Sub for log streaming. Option C would require configuring exports individually for each dev project, which can be cumbersome to manage and doesn't provide a unified view without additional aggregation. Option D is not recommended because it involves creating publicly shared Cloud Storage buckets in each project, which can lead to security and access control issues. It's also less centralized than using Pub/Sub for log export.
upvoted 1 times
...
...
283c101
1 year, 11 months ago
Answer is C
upvoted 3 times
...
iftikhar_ahmed
2 years ago
Answer should be C. please refer the below link https://cloud.google.com/logging/docs/export/configure_export_v2#managing_sinks
upvoted 3 times
...
shetniel
2 years, 1 month ago
Selected Answer: C
1. They require a unified view of all dev projects but didn't mention pre-prod and test; otherwise A would have been the right one. Hence C seems more accurate.
upvoted 3 times
...
marrechea
2 years, 2 months ago
Selected Answer: A
Definitely A
upvoted 4 times
...
DA95
2 years, 3 months ago
Option B is not correct because setting the includeChildren property to False will exclude the test and pre-production projects from the log export. Option C is not correct because it would require you to create a separate Cloud Pub/Sub topic for each development project, which would not meet the requirement to obtain a unified log view of all development projects. Option D is not correct because using a publicly shared Cloud Storage bucket would not provide a secure way to store and access the logs. It is generally not recommended to use publicly shared Cloud Storage buckets for storing sensitive data such as logs.
upvoted 1 times
...
PST21
2 years, 3 months ago
You can create aggregated sinks for Google Cloud folders and organizations. Because neither Cloud projects nor billing accounts contain child resources, you can't create aggregated sinks for those. This means the logs will be for the whole folder and will contain non-dev entries as well. Ans - C
upvoted 1 times
...

Question 11


A customer needs to prevent attackers from hijacking their domain/IP and redirecting users to a malicious site through a man-in-the-middle attack.
Which solution should this customer use?

  • A. VPC Flow Logs
  • B. Cloud Armor
  • C. DNS Security Extensions
  • D. Cloud Identity-Aware Proxy
Suggested Answer: C 🗳️

Comments

ESP_SAP
Highly Voted 2 years, 4 months ago
Correct Answer is (C): DNSSEC — use a DNS registrar that supports DNSSEC, and enable it. DNSSEC digitally signs DNS communication, making it more difficult (but not impossible) for hackers to intercept and spoof. Domain Name System Security Extensions (DNSSEC) adds security to the Domain Name System (DNS) protocol by enabling DNS responses to be validated. Having a trustworthy Domain Name System (DNS) that translates a domain name like www.example.com into its associated IP address is an increasingly important building block of today’s web-based applications. Attackers can hijack this process of domain/IP lookup and redirect users to a malicious site through DNS hijacking and man-in-the-middle attacks. DNSSEC helps mitigate the risk of such attacks by cryptographically signing DNS records. As a result, it prevents attackers from issuing fake DNS responses that may misdirect browsers to nefarious websites. https://cloud.google.com/blog/products/gcp/dnssec-now-available-in-cloud-dns
upvoted 15 times
...
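For reference, enabling DNSSEC on an existing Cloud DNS managed zone is a one-flag operation; the zone name below is a placeholder:

```shell
# Turn on DNSSEC signing for a Cloud DNS managed zone.
gcloud dns managed-zones update example-zone --dnssec-state on

# List the zone's DNSSEC keys, including the DS record data
# that must be published at the domain registrar.
gcloud dns dns-keys list --zone=example-zone
```

Signing alone is not enough: the DS record must also be added at the registrar so resolvers can validate the chain of trust.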
Kameswara
Highly Voted 1 year, 10 months ago
C. Attackers can hijack this process of domain/IP lookup and redirect users to a malicious site through DNS hijacking and man-in-the-middle attacks. DNSSEC helps mitigate the risk of such attacks by cryptographically signing DNS records. As a result, it prevents attackers from issuing fake DNS responses that may misdirect browsers to nefarious websites.
upvoted 5 times
...
AzureDP900
Most Recent 5 months, 1 week ago
C is right
upvoted 2 times
...
GCP72
7 months, 2 weeks ago
Selected Answer: C
The correct answer is C
upvoted 3 times
...
minostrozaml2
1 year, 2 months ago
Took the test today; only 5 questions from this dump, the rest are new questions.
upvoted 2 times
...
shreenine
1 year, 6 months ago
C is the correct answer indeed.
upvoted 3 times
...
sc_cloud_learn
1 year, 10 months ago
C. DNSSEC is the ans
upvoted 2 times
...
ASG
2 years, 1 month ago
It's man-in-the-middle attack protection. The traffic first needs to reach Cloud Armor before you can make use of Cloud Armor's protections. DNS can be hijacked if you don't use DNSSEC; it's your DNS that needs to resolve the initial request before traffic is directed to Cloud Armor. Option C is the most appropriate measure (think of the sequencing of how traffic will flow).
upvoted 3 times
...
bolu
2 years, 2 months ago
The answers from the rest of the folks are completely unreliable. The right answer is Cloud Armor, based on my hands-on labs in Qwiklabs. Reason: creating a policy in Cloud Armor sends a 403 Forbidden message for a man-in-the-middle attack. Reference: https://cloud.google.com/blog/products/identity-security/identifying-and-protecting-against-the-largest-ddos-attacks Some more: https://cloud.google.com/armor Refer to this lab: https://www.qwiklabs.com/focuses/1232?catalog_rank=%7B%22rank%22%3A1%2C%22num_filters%22%3A0%2C%22has_search%22%3Atrue%7D&parent=catalog&search_id=8696512
upvoted 2 times
KyubiBlaze
1 year, 7 months ago
No, C is the correct answer.
upvoted 1 times
...
...
[Removed]
2 years, 5 months ago
Ans - C
upvoted 2 times
...
saurabh1805
2 years, 6 months ago
DNSSEC is the thing, Option C
upvoted 2 times
...
MohitA
2 years, 7 months ago
C, Yes for sure DNSSEC
upvoted 2 times
...
bigdo
2 years, 8 months ago
C DNSSEC
upvoted 2 times
...
ArizonaClassics
2 years, 8 months ago
Option C is perfect. DNS Security Extensions!
upvoted 2 times
...
KILLMAD
3 years, 1 month ago
I agree it's C
upvoted 1 times
...

Question 12


A customer deploys an application to App Engine and needs to check for Open Web Application Security Project (OWASP) vulnerabilities.
Which service should be used to accomplish this?

  • A. Cloud Armor
  • B. Google Cloud Audit Logs
  • C. Web Security Scanner
  • D. Anomaly Detection
Suggested Answer: C 🗳️

Comments

Tabayashi
Highly Voted 2 years, 5 months ago
Answer is (C). Web Security Scanner supports categories in the OWASP Top Ten, a document that ranks and provides remediation guidance for the top 10 most critical web application security risks, as determined by the Open Web Application Security Project (OWASP). https://cloud.google.com/security-command-center/docs/concepts-web-security-scanner-overview#detectors_and_compliance
upvoted 10 times
...
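A scan like the one discussed above can be set up in the console (Security Command Center > Web Security Scanner) or via the alpha gcloud surface. The display name and URL are placeholders, and the alpha command group may have changed since this was written:

```shell
# Create a custom Web Security Scanner configuration for an App Engine app
# (alpha surface; subject to change).
gcloud alpha web-security-scanner scan-configs create \
  --display-name="owasp-scan" \
  --starting-urls="https://my-app.appspot.com"
```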
tia_gll
Most Recent 6 months ago
Selected Answer: C
The correct answer is C
upvoted 1 times
...
[Removed]
1 year, 2 months ago
Selected Answer: C
Web Security Scanner is the correct answer; however, it's now part of "Security Command Center". So technically it should say "Security Command Center", but "C" is the closest option.
upvoted 4 times
...
GCP72
2 years, 1 month ago
Selected Answer: C
The correct answer is C
upvoted 3 times
...
PopeyeTheSailorMan
2 years, 2 months ago
This is called DAST (Dynamic Application Security Testing) and is done through tools such as Burp Suite and ZAP in normal non-cloud deployments, but the same is done through Web Security Scanner in GCP; hence my answer is C.
upvoted 2 times
...

Question 13


A customer's data science group wants to use Google Cloud Platform (GCP) for their analytics workloads. Company policy dictates that all data must be company-owned and all user authentications must go through their own Security Assertion Markup Language (SAML) 2.0 Identity Provider (IdP). The
Infrastructure Operations Systems Engineer was trying to set up Cloud Identity for the customer and realized that their domain was already being used by G Suite.
How should you best advise the Systems Engineer to proceed with the least disruption?

  • A. Contact Google Support and initiate the Domain Contestation Process to use the domain name in your new Cloud Identity domain.
  • B. Register a new domain name, and use that for the new Cloud Identity domain.
  • C. Ask Google to provision the data science manager's account as a Super Administrator in the existing domain.
  • D. Ask customer's management to discover any other uses of Google managed services, and work with the existing Super Administrator.
Suggested Answer: D 🗳️

Comments

syllox
Highly Voted 3 years, 11 months ago
Ans :D
upvoted 12 times
...
TNT87
Highly Voted 4 years, 5 months ago
The answer is A. "This domain is already in use" - if you receive this message when trying to sign up for a Google service, it might be because: you recently removed this domain from another managed Google account (it can take 24 hours, or 7 days if you purchased your account from a reseller, before you can use the domain with a new account); you or someone in your organization already created a managed Google account with your domain (try resetting the administrator password, and an email will be sent to the secondary email you provided when you signed up, telling you how to access the account); or you're using the domain with another managed Google account that you own (if so, remove the domain from the other account). Contact us: if none of these applies, the previous owner of your domain might have signed up for a Google service. Fill out this form and the Support team will get back to you within 48 hours.
upvoted 9 times
lollo1234
3 years, 12 months ago
Answer is D - there is no evidence that the account is lost, or similar. In a large corp it is very possible that someone (the IT org) has registered with google, and the Data science Department simply haven't been given access to it yet.
upvoted 20 times
[Removed]
1 year, 8 months ago
Agreed.
upvoted 1 times
...
...
...
Sundar_Pichai
Most Recent 8 months, 3 weeks ago
Selected Answer: D
Least amount of disruption would mean working with the existing super admin
upvoted 1 times
...
[Removed]
1 year, 8 months ago
Selected Answer: D
"D" is the most sensible option. The other options would be forms of escalation if D was not possible.
upvoted 4 times
...
shetniel
2 years, 1 month ago
Selected Answer: D
If the domain is already in use by Google Workspace (GSuite); then there is no need of setting up Cloud Identity again. The least disruptive way would be to work with the existing super administrator. Domain contestation form is required when you need to reclaim the domain or recover the super admin access. This might break a few things if not planned correctly.
upvoted 5 times
...
mahi9
2 years, 1 month ago
Selected Answer: D
Ans: D is viable option
upvoted 2 times
...
Sammydp202020
2 years, 2 months ago
Answer: A. Here's why --> https://support.google.com/a/answer/6286258?hl=en - when the form is launched, it opens a Google ticket. Therefore, A is the appropriate answer to this Q.
upvoted 2 times
...
Ballistic_don
2 years, 2 months ago
Ans :D
upvoted 1 times
...
shayke
2 years, 3 months ago
Selected Answer: A
A is the right ans
upvoted 1 times
...
GCP72
2 years, 7 months ago
Selected Answer: D
The answer is D
upvoted 1 times
...
Ksrp
3 years, 1 month ago
It's A: https://support.google.com/a/answer/6286258?hl=en#:~:text=If%20you%20get%20an%20alert,that%20you%20don't%20manage.
upvoted 1 times
...
idtroo
4 years ago
Answer is D. https://support.google.com/cloudidentity/answer/7389973 - if you're an existing Google Workspace customer, follow these steps to sign up for Cloud Identity Premium: using your administrator account, sign in to the Google Admin console at admin.google.com. From the Admin console Home page, at the top left, click Menu, then Billing, then Get more services. Click Cloud Identity. Next to Cloud Identity Premium, click Start Free Trial. Follow the guided instructions.
upvoted 7 times
...
TNT87
4 years, 1 month ago
Sorry Ans is D
upvoted 5 times
...
CloudTrip
4 years, 1 month ago
A and B are definitely not the answer for this. Most of you are aligned with D, but can somebody explain what is wrong with C? Their domain is already used by G Suite. It would be least disruptive also.
upvoted 1 times
[Removed]
1 year, 8 months ago
Also, you would only go to Google to override if there is no admin at your company.
upvoted 1 times
...
lollo1234
3 years, 12 months ago
Principle of least privilege - should the 'data science manager' be a super admin? Probably not. Hence D: work with the existing admin - we assume they were chosen sensibly.
upvoted 5 times
...
...
ronron89
4 years, 4 months ago
I think its D. @SomabrataPani: did you pass this exam yet?
upvoted 2 times
...
[Removed]
4 years, 5 months ago
Ans - D
upvoted 2 times
...
saurabh1805
4 years, 6 months ago
D is best answer here.
upvoted 2 times
...

Question 14


A business unit at a multinational corporation signs up for GCP and starts moving workloads into GCP. The business unit creates a Cloud Identity domain with an organizational resource that has hundreds of projects.
Your team becomes aware of this and wants to take over managing permissions and auditing the domain resources.
Which type of access should your team grant to meet this requirement?

  • A. Organization Administrator
  • B. Security Reviewer
  • C. Organization Role Administrator
  • D. Organization Policy Administrator
Suggested Answer: A 🗳️

Comments

ffdd1234
Highly Voted 4 years, 2 months ago
Answer A - it's the only one that allows you to manage permissions on the projects. Answer B - doesn't carry any IAM set permissions, so it is not correct. C - organizationRoleAdmin only lets you create custom roles; you can't assign them to anyone (so with this one you can't manage permissions, just create roles). D - org policies are for managing the org policy constraints, which is not about project permissions. For me the correct answer is A.
upvoted 29 times
...
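Granting the role ffdd1234 argues for is a single binding at the organization level; the organization ID and group address below are placeholders:

```shell
# Grant the security team Organization Administrator on the org node.
gcloud organizations add-iam-policy-binding 123456789012 \
  --member="group:secops-team@example.com" \
  --role="roles/resourcemanager.organizationAdmin"
```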
zanhsieh
Highly Voted 4 years, 3 months ago
C. After carefully reviewing this link: https://cloud.google.com/iam/docs/understanding-roles my opinion is based on the least-privilege practice, i.e. that future domains shall not get granted automatically. A - too broad permissions; the question said "The business unit creates a Cloud Identity domain..." and does not imply your team should be granted permission management for ALL future domain(s) (domain = folder). B - Security Reviewer does not have any "set*" permission; all this role can do is look, not manage. C - the best answer so far: only the currently created domain and the IAM role assignments underneath it. D - too broad permissions at the organization level; in other words, this role could make policy, but future domain admins could hijack the role names/policies to perform undesired operations.
upvoted 12 times
zzaric
3 years ago
C can't do the job - they have to manage the IAM permissions, and C doesn't have any setIamPolicy permissions; the role is only for creating custom roles. See the permissions it contains: iam.roles.create, iam.roles.delete, iam.roles.get, iam.roles.list, iam.roles.undelete, iam.roles.update, resourcemanager.organizations.get, resourcemanager.organizations.getIamPolicy, resourcemanager.projects.get, resourcemanager.projects.getIamPolicy, resourcemanager.projects.list
upvoted 6 times
zzaric
3 years ago
IAM - not IAP - typo
upvoted 1 times
...
...
Loved
2 years, 4 months ago
"If you have an organization associated with your Google Cloud account, the Organization Role Administrator role enables you to administer all custom roles in your organization", it can not be C
upvoted 2 times
...
...
PankajKapse
Most Recent 6 months, 3 weeks ago
Selected Answer: A
as mentioned by ffdd1234's answer
upvoted 1 times
...
dija123
1 year ago
Selected Answer: A
A. Organization Administrator
upvoted 1 times
...
okhascorpio
1 year, 5 months ago
GPT says both A and C can be used. I don't know; too many similar answers, can't say for certain which one is the correct answer anymore. How can one pass the exam like this????
upvoted 1 times
...
aliounegdiop
1 year, 7 months ago
A. Organization Administrator Here's why: Organization Administrator: This role provides full control over all resources and policies within the organization, including permissions and auditing. It allows your team to manage permissions, policies, and configurations at the organizational level, making it the most appropriate choice when you need comprehensive control. Security Reviewer: This role focuses on reviewing and assessing security configurations but doesn't grant the level of control needed for managing permissions and auditing at the organizational level. Organization Role Administrator: This role allows management of IAM roles at the organization level but doesn't provide control over policies and auditing. Organization Policy Administrator: This role allows for the management of organization policies, but it doesn't cover permissions and auditing.
upvoted 3 times
...
elad17
1 year, 11 months ago
Selected Answer: A
A is the only role that gives you management permissions and not just viewing / role editing.
upvoted 4 times
...
Ishu_awsguy
2 years, 2 months ago
I would go with A. Auditing all domain resources might have a very broad scope, and C might not have those permissions. Because it is auditing, I believe it's a responsible job, so A can be afforded.
upvoted 2 times
...
GCP72
2 years, 7 months ago
Selected Answer: C
The correct answer is C
upvoted 1 times
...
Medofree
3 years ago
Answer is A; among the 4, it is the only role able to manage permissions.
upvoted 3 times
...
Lancyqusa
3 years, 3 months ago
The answer must be A - check out the example that allows the CTO to setup permissions for the security team: https://cloud.google.com/iam/docs/job-functions/auditing#scenario_operational_monitoring
upvoted 2 times
...
OSNG
3 years, 7 months ago
It's A. They are looking for domain resource management, i.e. projects, folders, permissions, and Organization Administrator is the only option that allows it. Moreover, Organization Administrator is the only option that falls under "Used in: Resource Manager": roles/resourcemanager.organizationAdmin
upvoted 1 times
...
[Removed]
4 years ago
C is the answer. Here are the permissions available to organizationRoleAdmin: iam.roles.create, iam.roles.delete, iam.roles.undelete, iam.roles.get, iam.roles.list, iam.roles.update, resourcemanager.projects.get, resourcemanager.projects.getIamPolicy, resourcemanager.projects.list, resourcemanager.organizations.get, resourcemanager.organizations.getIamPolicy. These are sufficient as per the least-privilege policy. You can do role management as well as auditing.
upvoted 5 times
[Removed]
4 years ago
link - https://cloud.google.com/iam/docs/understanding-custom-roles
upvoted 1 times
...
...
DebasishLowes
4 years ago
Ans: D. As it's related to resources, policy definitely comes into the picture.
upvoted 1 times
...
HateMicrosoft
4 years, 1 month ago
Correct is D https://cloud.google.com/resource-manager/docs/organization-policy/overview
upvoted 2 times
...
BhupalS
4 years, 3 months ago
Role: roles/iam.organizationRoleAdmin. Permissions: iam.roles.create, iam.roles.delete, iam.roles.undelete, iam.roles.get, iam.roles.list, iam.roles.update, resourcemanager.projects.get, resourcemanager.projects.getIamPolicy, resourcemanager.projects.list, resourcemanager.organizations.get, resourcemanager.organizations.getIamPolicy
upvoted 1 times
...
FatCharlie
4 years, 4 months ago
The confusion here, in my opinion, is that the question is asking for the ability to manage roles & audit _DOMAIN_ resources. Domain resources in the GCP hierarchy are folders & projects, because those are the only things that can be directly under an Organization (aka Domain). The Organization Role Admin is the option that gives you the ability to manage custom roles & list folders & projects.
upvoted 5 times
...

Question 15


An application running on a Compute Engine instance needs to read data from a Cloud Storage bucket. Your team does not allow Cloud Storage buckets to be globally readable and wants to ensure the principle of least privilege.
Which option meets the requirement of your team?

  • A. Create a Cloud Storage ACL that allows read-only access from the Compute Engine instance's IP address and allows the application to read from the bucket without credentials.
  • B. Use a service account with read-only access to the Cloud Storage bucket, and store the credentials to the service account in the config of the application on the Compute Engine instance.
  • C. Use a service account with read-only access to the Cloud Storage bucket to retrieve the credentials from the instance metadata.
  • D. Encrypt the data in the Cloud Storage bucket using Cloud KMS, and allow the application to decrypt the data with the KMS key.
Suggested Answer: C 🗳️

Comments

Medofree
Highly Voted 2 years ago
Selected Answer: C
Correct ans is C. The credentials are retrieved from the metadata server.
upvoted 13 times
...
ESP_SAP
Highly Voted 3 years, 4 months ago
Correct Answer is (B): If your application runs inside a Google Cloud environment that has a default service account, your application can retrieve the service account credentials to call Google Cloud APIs. Such environments include Compute Engine, Google Kubernetes Engine, App Engine, Cloud Run, and Cloud Functions. We recommend using this strategy because it is more convenient and secure than manually passing credentials. Additionally, we recommend you use Google Cloud Client Libraries for your application. Google Cloud Client Libraries use a library called Application Default Credentials (ADC) to automatically find your service account credentials. ADC looks for service account credentials in the following order: https://cloud.google.com/docs/authentication/production#automatically
upvoted 13 times
ChewB666
3 years, 4 months ago
Hello guys! Does anyone have the rest of the questions to share? :( I can't see the rest of the questions because of the subscription.
upvoted 3 times
...
[Removed]
8 months, 3 weeks ago
Interestingly, the link you listed recommends using an attached service account. Attached service accounts use the metadata server to get credentials for the service. Reference: https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa
upvoted 3 times
[Removed]
8 months, 3 weeks ago
ADC tries to get credentials for attached service account from the environment variable first, then a "well-known location for credentials" (AKA Secret Manager) and then the metadata server. There is no reference for application configuration (i.e. code). Which makes "B" invalid and "C" the correct choice. https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa
upvoted 2 times
...
...
...
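The metadata-server flow the thread keeps referring to can be seen directly from inside a VM. The service-account, project, and bucket names below are placeholders:

```shell
# Grant a dedicated service account read-only access to the bucket,
# then attach that account to the instance (done once, from anywhere).
gcloud storage buckets add-iam-policy-binding gs://example-data-bucket \
  --member="serviceAccount:reader-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"

# From inside the VM: fetch a short-lived access token from the metadata
# server. No key file is ever stored on disk or in application config.
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
```

Client libraries perform this token fetch automatically via Application Default Credentials, which is why option C stores no credentials at all.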
okhascorpio
Most Recent 5 months, 3 weeks ago
A. Although it would work, it is a less preferred method and error-prone. B. Storing credentials in config is not a good idea. C. Is the preferred method, as applications can get credentials from instance metadata securely. D. Does not provide controlled access, only encryption.
upvoted 2 times
...
ArizonaClassics
6 months, 3 weeks ago
C. Use a service account with read-only access to the Cloud Storage bucket to retrieve the credentials from the instance metadata.
upvoted 2 times
...
[Removed]
8 months, 3 weeks ago
Selected Answer: C
The answer is "C" because it references the preferred method for attaching a service account to an application. The following page explains the preferred method for setting up a service account and attaching it to an application (where a metadata server is used to store credentials). https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa
upvoted 2 times
...
1br4in
10 months, 2 weeks ago
correct is B: Use a service account with read-only access to the Cloud Storage bucket and store the service account credentials in the application configuration on the Compute Engine instance. By using a service account with read-only access to the Cloud Storage bucket, you can provide the application with the credentials needed to read data from the bucket. Storing the service account credentials in the application configuration on the Compute Engine instance ensures that only the application on that instance has access to the credentials and, consequently, to the bucket. This option follows the principle of least privilege, since the service account has only the permissions needed to read data from the Cloud Storage bucket and the credentials are limited to the specific application on the Compute Engine instance. Moreover, it does not require global access to Cloud Storage buckets or the use of IP-address-based network access permissions.
upvoted 1 times
...
mahi9
1 year, 1 month ago
Selected Answer: C
C is the most viable option
upvoted 2 times
...
Meyucho
1 year, 5 months ago
Selected Answer: A
A CORRECT: It's the only answer where you use ACLs to filter local IP addresses and can have the bucket without global access. B INCORRECT: Doesn't use the least-privilege principle. C INCORRECT: What credentials are we talking about!? For this, option B is better. D INCORRECT: Needs global access.
upvoted 3 times
gcpengineer
10 months, 3 weeks ago
no.its not a soln
upvoted 1 times
...
...
dat987
1 year, 5 months ago
Selected Answer: B
metadata does not set the service account
upvoted 2 times
[Removed]
8 months, 3 weeks ago
Application Default Credentials (ADC) is responsible for providing applications with credentials of the attached service account. ".. If ADC does not find credentials it can use in either the GOOGLE_APPLICATION_CREDENTIALS environment variable or the well-known location for Google Account credentials, it uses the metadata server to get credentials..." https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa
upvoted 2 times
...
...
GCP72
1 year, 7 months ago
Selected Answer: C
The correct answer is C
upvoted 2 times
...
[Removed]
2 years ago
B If the environment variable GOOGLE_APPLICATION_CREDENTIALS is set, ADC uses the service account key or configuration file that the variable points to. https://cloud.google.com/docs/authentication/production#automatically
upvoted 1 times
[Removed]
8 months, 3 weeks ago
"B" says "..config of the application.." which is stored in the code. It does not say "environment variable". Therefore the correct answer is "C" since credentials are also stored in metadata server too. https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa
upvoted 1 times
...
...
AaronLee
2 years ago
The Answer is C If the environment variable GOOGLE_APPLICATION_CREDENTIALS is set, ADC uses the service account key or configuration file that the variable points to. If the environment variable GOOGLE_APPLICATION_CREDENTIALS isn't set, ADC uses the service account that is attached to the resource that is running your code. https://cloud.google.com/docs/authentication/production#passing_the_path_to_the_service_account_key_in_code
upvoted 4 times
...
jj_618
2 years, 6 months ago
So is it B or C?
upvoted 1 times
StanPeng
2 years, 2 months ago
B for sure. C is wrong logic
upvoted 1 times
Ishu_awsguy
1 year, 2 months ago
C is the right answer. If the service account has read permissions to cloud storage. Nothing extra is needed
upvoted 1 times
...
Medofree
2 years ago
No, C is the right answer: you don't need to generate credentials in GCP since they are stored in the metadata server; the application will retrieve them automatically through a Google library (or even manually by calling the URL: curl http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token -H "Metadata-Flavor: Google")
upvoted 3 times
...
...
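The manual metadata-server call mentioned above can also be made from code. A minimal sketch using only the standard library — it will only actually succeed when run on a Compute Engine/GKE instance where metadata.google.internal is resolvable:

```python
import json
import urllib.request

METADATA_TOKEN_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

def build_token_request():
    # The Metadata-Flavor header is mandatory; the metadata server
    # rejects requests that do not include it.
    return urllib.request.Request(
        METADATA_TOKEN_URL, headers={"Metadata-Flavor": "Google"}
    )

def fetch_access_token():
    """Fetch an OAuth2 access token for the attached service account.

    Works only inside a Google Cloud environment with an attached
    service account; off-cloud, the hostname will not resolve.
    """
    with urllib.request.urlopen(build_token_request(), timeout=5) as resp:
        return json.loads(resp.read())["access_token"]
```

The returned token can then be used as a Bearer token against Cloud Storage, which is exactly why no key needs to be stored in the application configuration.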
...
bolu
3 years, 2 months ago
The answer could be either B or C, since both involve a service account. But storing credentials in the app is a well-known bad practice, as we read repeatedly online, so C is the best answer: handle the service account through metadata.
upvoted 5 times
[Removed]
8 months, 3 weeks ago
Agreed. B recommends storing credentials in code (app config) which is never good practice. Option C is the most secure out of all the options presented. https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa
upvoted 1 times
...
...
[Removed]
3 years, 5 months ago
Ans - C
upvoted 1 times
...
HectorLeon2099
3 years, 6 months ago
I'll go with B. A - ACL's are not able to allow access based on IP C - If you store the credentials in the metadata those will be public accessible by everyone with project access. D - Too complex
upvoted 6 times
saurabh1805
3 years, 5 months ago
Yes, B is the best possible option. This is something Google also recommends. https://cloud.google.com/storage/docs/authentication#libauth
upvoted 3 times
[Removed]
8 months, 3 weeks ago
B recommends storing credentials in code (app config) which is not recommended. Correct answer is C. Also metadata is different from metadata server. Metadata server is used to store service credentials for attached service accounts. https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa
upvoted 1 times
...
gcpengineer
10 months, 3 weeks ago
Google never recommends that
upvoted 3 times
...
...
...
CHECK666
3 years, 6 months ago
c is correct
upvoted 2 times
...

Question 16

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 16 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 16
Topic #: 1
[All Professional Cloud Security Engineer Questions]

An organization's typical network and security review consists of analyzing application transit routes, request handling, and firewall rules. They want to enable their developer teams to deploy new applications without the overhead of this full review.
How should you advise this organization?

  • A. Use Forseti with Firewall filters to catch any unwanted configurations in production.
  • B. Mandate use of infrastructure as code and provide static analysis in the CI/CD pipelines to enforce policies.
  • C. Route all VPC traffic through customer-managed routers to detect malicious patterns in production.
  • D. All production applications will run on-premises. Allow developers free rein in GCP as their dev and QA platforms.
Show Suggested Answer Hide Answer
Suggested Answer: B 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
bluetaurianbull
Highly Voted 3 years ago
@TNT87 and others, if you say (B) or even (C) or (A), can you provide proof and URLs to support your claims? Simply saying "if you have done Cloud Architect you will know everything under the sun" is not a proper response; this is a discussion and a community trying to learn. Not everyone will be at the same level. Please be helpful to others....
upvoted 16 times
[Removed]
8 months, 3 weeks ago
Here you go for "B" https://www.terraform.io/use-cases/enforce-policy-as-code
upvoted 1 times
...
...
OSNG
Highly Voted 2 years, 7 months ago
It's B. Reasons: 1. They are asking for advice for developers (IaC is suitable, as they don't have to manage infrastructure manually). Moreover, the statement "An organization's typical network and security review consists of analyzing application transit routes, request handling, and firewall rules" defines the process; they are not asking for an option to review the rules. Using Forseti does not reduce the overhead for developers.
upvoted 10 times
...
ppandher
Most Recent 5 months, 3 weeks ago
"They want to enable their developer teams to deploy new applications without the overhead of this full review" — the question says this. I am not sure that feature is available in Forseti; its modules are Inventory, Scanner, Explain, Enforce & Notification.
upvoted 1 times
...
[Removed]
8 months, 3 weeks ago
Selected Answer: B
The question emphasizes infrastructure-related overhead. "B" is the only answer that addresses infrastructure overhead by leveraging infrastructure as code. Specifically, the overhead is around security and policy concerns, which are addressed by Terraform in what it calls "policy as code". https://www.terraform.io/use-cases/enforce-policy-as-code
upvoted 1 times
...
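The "static analysis in the CI/CD pipeline" that answer B describes can be sketched as a tiny pipeline gate. A minimal sketch, assuming the pipeline renders the Terraform plan to JSON (`terraform show -json plan.out`); the policy itself — rejecting firewall rules open to 0.0.0.0/0 — is a deliberately toy example:

```python
# Toy CI/CD policy check: fail the pipeline if any google_compute_firewall
# resource in a Terraform JSON plan allows ingress from 0.0.0.0/0.
# Field names follow the Terraform JSON plan format (resource_changes,
# change.after, address); the policy is a minimal illustration.

def find_open_firewall_rules(plan: dict) -> list:
    violations = []
    for change in plan.get("resource_changes", []):
        if change.get("type") != "google_compute_firewall":
            continue
        after = (change.get("change") or {}).get("after") or {}
        if "0.0.0.0/0" in (after.get("source_ranges") or []):
            violations.append(change.get("address"))
    return violations

example_plan = {
    "resource_changes": [
        {
            "address": "google_compute_firewall.allow_all",
            "type": "google_compute_firewall",
            "change": {"after": {"source_ranges": ["0.0.0.0/0"]}},
        },
        {
            "address": "google_compute_firewall.internal_only",
            "type": "google_compute_firewall",
            "change": {"after": {"source_ranges": ["10.0.0.0/8"]}},
        },
    ]
}

print(find_open_firewall_rules(example_plan))
# -> ['google_compute_firewall.allow_all']
```

A non-empty result would fail the build, which is how the full manual network review can be replaced with an automated, preventive check.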
TonytheTiger
1 year, 4 months ago
B: the best answer. https://cloud.google.com/recommender/docs/tutorial-iac
upvoted 1 times
...
GCP72
1 year, 7 months ago
Selected Answer: B
The correct answer is B
upvoted 1 times
...
Jeanphi72
1 year, 8 months ago
Selected Answer: A
The problem I see with B is that there is no reason why reviews should disappear: IaC is code, and code needs to be reviewed before being deployed. Depending on the company, DevOps engineers writing Terraform / CDK are considered developers as well. Forseti seems able to automate this: https://github.com/forseti-security/forseti-security/tree/master/samples/scanner/scanners
upvoted 1 times
...
szl0144
1 year, 10 months ago
I think B is the answer, can anybody explain why A is correct?
upvoted 1 times
badrik
1 year, 10 months ago
A is detective in nature while B is preventive. So, It's B !
upvoted 2 times
...
...
minostrozaml2
2 years, 2 months ago
Took the test today; only 5 questions from this dump, the rest are new questions.
upvoted 2 times
...
ThisisJohn
2 years, 3 months ago
Selected Answer: B
My vote goes to B by elimination. A) only mentions firewall rules, nothing about network routes, and nothing on the Forseti website either https://forsetisecurity.org/about/ C) Talks about malicious patterns, not about network routes and request handling, as the question says D) Running on-prem doesn't guarantee a higher level of control. Thus, the only answer that makes sense to me is B.
upvoted 2 times
...
TNT87
3 years, 1 month ago
If you have done Cloud Architect, you will understand why the answer is B
upvoted 4 times
bluetaurianbull
2 years, 11 months ago
It's like saying "if you have gone to space you experience weightlessness"... Be professional, man... Give proof for your claims; don't just expect the world to be at the same level as you. That's what COMMUNITY LEARNING is about...
upvoted 10 times
TNT87
1 year ago
kkkkkkkkkkkkk then research rather than being angry
upvoted 1 times
...
...
...
[Removed]
3 years, 5 months ago
Ans - C
upvoted 2 times
[Removed]
3 years, 5 months ago
Sry(Typo) .. It's B
upvoted 2 times
...
...
saurabh1805
3 years, 5 months ago
I will also go with option A
upvoted 1 times
...
CHECK666
3 years, 6 months ago
B is the answer
upvoted 1 times
...
ownez
3 years, 7 months ago
The answer is B and not A because A operates in production, whereas the question is about enabling developer teams to deploy new applications without the overhead of the full review. Implementing IaC is suitable for this. Answer is B.
upvoted 3 times
...
MohitA
3 years, 7 months ago
Yes B serves the purpose.
upvoted 2 times
...
aiwaai
3 years, 7 months ago
Answer is A
upvoted 1 times
...

Question 17

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 17 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 17
Topic #: 1
[All Professional Cloud Security Engineer Questions]

An employer wants to track how bonus compensations have changed over time to identify employee outliers and correct earning disparities. This task must be performed without exposing the sensitive compensation data for any individual and must be reversible to identify the outlier.
Which Cloud Data Loss Prevention API technique should you use to accomplish this?

  • A. Generalization
  • B. Redaction
  • C. CryptoHashConfig
  • D. CryptoReplaceFfxFpeConfig
Show Suggested Answer Hide Answer
Suggested Answer: D 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
xhova
Highly Voted 4 years, 6 months ago
Answer is D https://cloud.google.com/dlp/docs/pseudonymization
upvoted 17 times
smart123
4 years, 3 months ago
Option D is correct because it is reversible whereas option B is not.
upvoted 3 times
...
SilentSec
4 years, 2 months ago
Also the same usecase in the url that you post. D is right.
upvoted 1 times
...
...
gcp_learner
Highly Voted 4 years, 3 months ago
The answer is A. By bucketing or generalizing, we achieve reversible pseudonymised data that can still yield the required analysis. https://cloud.google.com/dlp/docs/concepts-bucketing
upvoted 6 times
Sheeda
4 years, 1 month ago
Completely wrong The answer is D for sure. The example was even in google docs but replaced for some reasons. http://price2meet.com/gcp/docs/dlp_docs_pseudonymization.pdf
upvoted 7 times
...
...
crazycosmos
Most Recent 4 months, 1 week ago
Selected Answer: D
it is reversible for D
upvoted 1 times
...
ManuelY
5 months ago
Selected Answer: D
Reversible
upvoted 1 times
...
Kiroo
6 months ago
Selected Answer: D
For sure it is D https://cloud.google.com/sensitive-data-protection/docs/transformations-reference#fpe I was in doubt about C, but a hash can't be reversed to the original value
upvoted 1 times
...
ketoza
9 months, 1 week ago
Selected Answer: D
https://cloud.google.com/dlp/docs/transformations-reference#fpe
upvoted 1 times
...
okhascorpio
11 months, 4 weeks ago
A. seems like good fit here. Preserve data utility while also reducing the identifiability of the data. https://cloud.google.com/dlp/docs/concepts-bucketing
upvoted 1 times
okhascorpio
11 months, 4 weeks ago
I take it back. its not reversible.
upvoted 1 times
...
...
[Removed]
1 year, 2 months ago
Selected Answer: D
The keyword here is "reversible" or allows for "re-identification". Out of the options listed, Format preserving encryption (FPE-FFX) is the only one that allows "re-identification". Therefore "D" is the most accurate option. References: https://cloud.google.com/dlp/docs/pseudonymization (see the table) https://en.wikipedia.org/wiki/Format-preserving_encryption
upvoted 2 times
...
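For reference, a de-identify configuration using the reversible FPE transformation discussed above looks roughly like this. This is a sketch of the REST request-body shape from the DLP transformation reference; the field name, KMS key path, wrapped key, and surrogate infoType (`COMP_TOKEN`) are placeholder values:

```python
# Sketch of a Cloud DLP deidentifyConfig using format-preserving encryption
# (CryptoReplaceFfxFpeConfig). Because the transformation is keyed encryption,
# the same key can later be used in a reidentify request — this is what makes
# it reversible, unlike redaction, generalization, or plain crypto-hashing.

deidentify_config = {
    "recordTransformations": {
        "fieldTransformations": [
            {
                "fields": [{"name": "bonus_compensation"}],
                "primitiveTransformation": {
                    "cryptoReplaceFfxFpeConfig": {
                        "cryptoKey": {
                            "kmsWrapped": {
                                "wrappedKey": "BASE64_WRAPPED_KEY",
                                "cryptoKeyName": (
                                    "projects/my-project/locations/global/"
                                    "keyRings/my-ring/cryptoKeys/my-key"
                                ),
                            }
                        },
                        # Preserve the numeric format of the compensation value.
                        "commonAlphabet": "NUMERIC",
                        # Tag the output so reidentify can find the tokens.
                        "surrogateInfoType": {"name": "COMP_TOKEN"},
                    }
                },
            }
        ]
    }
}
```

Passing the same `cryptoKey` in a `reidentifyConfig` reverses the transformation, which is the property the question requires.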
aashissh
1 year, 5 months ago
Selected Answer: A
Generalization is a technique that replaces an original value with a similar, but not identical, value. This technique can be used to help protect sensitive data while still allowing statistical analysis. In this scenario, the employer can use generalization to replace the actual bonus compensation values with generalized values that are statistically similar but not identical. This allows the employer to perform analysis on the data without exposing the sensitive compensation data for any individual employee. Using Generalization can be reversible to identify outliers. The employer can then use the original data to investigate further and correct any earning disparities. Redaction is another DLP API technique that can be used to protect sensitive data, but it is not suitable for this scenario since it would remove the data completely and make statistical analysis impossible. CryptoHashConfig and CryptoReplaceFfxFpeConfig are also not suitable for this scenario since they are encryption techniques and do not allow statistical analysis of data.
upvoted 3 times
...
Lyfedge
1 year, 6 months ago
Correct Answer is (D): De-identifying sensitive data Cloud Data Loss Prevention (DLP) can de-identify sensitive data in text content, including text stored in container structures such as tables. De-identification is the process of removing identifying information from data. The API detects sensitive data such as personally identifiable information (PII), and then uses a de-identification transformation to mask, delete, or otherwise obscure the data. For example, de-identification techniques can include any of the following: Masking sensitive data by partially or fully replacing characters with a symbol, such as an asterisk (*) or hash (#).
upvoted 1 times
...
mahi9
1 year, 7 months ago
Selected Answer: D
D is the most viable option
upvoted 1 times
...
null32sys
1 year, 7 months ago
The Answer is A
upvoted 1 times
...
Ishu_awsguy
1 year, 8 months ago
Correct answer is D. But the options do not include a CryptoDeterministicConfig. "We recommend using CryptoDeterministicConfig for all use cases which do not require preserving the input alphabet space and size, plus warrant referential integrity." https://cloud.google.com/dlp/docs/transformations-reference#transformation_methods
upvoted 1 times
...
zanhsieh
1 year, 9 months ago
Answer D. Note that `CryptoReplaceFfxFpeConfig` might not be used in a real exam; they might change it to `format-preserving encryption`.
upvoted 5 times
...
Littleivy
1 year, 11 months ago
The answer is D https://cloud.google.com/dlp/docs/transformations-reference#transformation_methods
upvoted 2 times
...
Premumar
1 year, 11 months ago
Selected Answer: D
D is the only option that is reversible.
upvoted 3 times
...

Question 18

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 18 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 18
Topic #: 1
[All Professional Cloud Security Engineer Questions]

An organization adopts Google Cloud Platform (GCP) for application hosting services and needs guidance on setting up password requirements for their Cloud
Identity account. The organization has a password policy requirement that corporate employee passwords must have a minimum number of characters.
Which Cloud Identity password guidelines can the organization use to inform their new requirements?

  • A. Set the minimum length for passwords to be 8 characters.
  • B. Set the minimum length for passwords to be 10 characters.
  • C. Set the minimum length for passwords to be 12 characters.
  • D. Set the minimum length for passwords to be 6 characters.
Show Suggested Answer Hide Answer
Suggested Answer: A 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
bolu
Highly Voted 4 years, 2 months ago
The situation changes year on year on GCP. Right now the right answer is C, based on a minimum requirement of 12 characters in GCP as of Jan 2021. https://support.google.com/accounts/answer/32040?hl=en#zippy=%2Cmake-your-password-longer-more-memorable
upvoted 19 times
desertlotus1211
4 years ago
It asked for Cloud Identity password requirements... Minimum is 8, maximum is 100
upvoted 9 times
...
...
KILLMAD
Highly Voted 5 years, 1 month ago
Ans is A
upvoted 12 times
rafaelc
5 years ago
Default password length is 8 characters. https://support.google.com/cloudidentity/answer/33319?hl=en
upvoted 11 times
lolanczos
1 month, 1 week ago
That page is about the default for the form, not the recommended best practice.
upvoted 1 times
...
...
...
Rakesh21
Most Recent 2 months, 2 weeks ago
Selected Answer: D
Follow the Google Cloud Documentation at https://cloud.google.com/identity-platform/docs/password-policy
upvoted 1 times
lolanczos
1 month, 1 week ago
That link says absolutely NOTHING about the recommended length.
upvoted 1 times
...
...
dlenehan
3 months, 3 weeks ago
Selected Answer: C
Password advice changes, latest (Dec 2024) is 12 chars: https://support.google.com/accounts/answer/32040?hl=en#zippy=%2Cmake-your-password-longer-more-memorable
upvoted 1 times
...
Ademobi
4 months ago
Selected Answer: A
The correct answer is A. Set the minimum length for passwords to be 8 characters. According to Google Cloud Identity's password guidelines, the minimum password length is 8 characters. This is a default setting that can be adjusted to meet the organization's specific requirements. Here's a quote from the Google Cloud Identity documentation: "The minimum password length is 8 characters. You can adjust this setting to meet your organization's password policy requirements." Therefore, option A is the correct answer.
upvoted 1 times
...
BPzen
4 months, 4 weeks ago
Selected Answer: B
The most accurate answer based on Cloud Identity's password guidelines is B. Set the minimum length for passwords to be 10 characters. While Cloud Identity allows you to set a minimum password length as low as 6 characters, Google recommends a minimum of 10 characters for stronger security. This aligns with industry best practices for password security. Here's why the other options are not the best advice: A. 8 characters: While better than 6, it's still shorter than the recommended minimum. C. 12 characters: While this is a strong password length, it might be unnecessarily long for some organizations and could lead to user frustration. D. 6 characters: This is generally considered too short for a secure password in modern environments.
upvoted 1 times
...
pico
11 months ago
Selected Answer: D
Minimum is 6 https://cloud.google.com/identity-platform/docs/password-policy
upvoted 3 times
...
dija123
1 year ago
Selected Answer: A
Minimum 8
upvoted 1 times
...
madcloud32
1 year, 1 month ago
Selected Answer: C
A 12-character minimum is good for app security.
upvoted 1 times
...
[Removed]
1 year, 8 months ago
Selected Answer: A
"A" By default the minimum number of characters is 8 (max 100) however range can be adjusted. https://support.google.com/a/answer/139399?sjid=18255262015630288726-NA
upvoted 2 times
...
amanshin
1 year, 9 months ago
Answer is A The minimum password length for application hosting services on GCP was 12 characters until January 2023. However, it was recently changed to 8 characters. This change was made to make it easier for users to create and remember strong passwords.
upvoted 1 times
...
Sachu555
2 years ago
C is the correct ans
upvoted 1 times
...
Sammydp202020
2 years, 2 months ago
Selected Answer: A
Answer is A
upvoted 1 times
...
blue123456
2 years, 4 months ago
Ans A https://support.google.com/cloudidentity/answer/2537800?hl=en#zippy=%2Creset-a-users-password
upvoted 2 times
...
xchmielu
2 years, 4 months ago
Selected Answer: C
https://support.google.com/accounts/answer/32040?hl=en#zippy=%2Cmake-your-password-longer-more-memorable
upvoted 1 times
...
GCP72
2 years, 7 months ago
Selected Answer: A
The answer is A
upvoted 1 times
...
otokichi3
2 years, 10 months ago
The answer is A. minimum character length is 8. https://support.google.com/cloudidentity/answer/139399?hl=en
upvoted 1 times
...

Question 19

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 19 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 19
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You need to follow Google-recommended practices to leverage envelope encryption and encrypt data at the application layer.
What should you do?

  • A. Generate a data encryption key (DEK) locally to encrypt the data, and generate a new key encryption key (KEK) in Cloud KMS to encrypt the DEK. Store both the encrypted data and the encrypted DEK.
  • B. Generate a data encryption key (DEK) locally to encrypt the data, and generate a new key encryption key (KEK) in Cloud KMS to encrypt the DEK. Store both the encrypted data and the KEK.
  • C. Generate a new data encryption key (DEK) in Cloud KMS to encrypt the data, and generate a key encryption key (KEK) locally to encrypt the key. Store both the encrypted data and the encrypted DEK.
  • D. Generate a new data encryption key (DEK) in Cloud KMS to encrypt the data, and generate a key encryption key (KEK) locally to encrypt the key. Store both the encrypted data and the KEK.
Show Suggested Answer Hide Answer
Suggested Answer: A 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
Sheeda
Highly Voted 4 years, 1 month ago
Yes, A is correct The process of encrypting data is to generate a DEK locally, encrypt data with the DEK, use a KEK to wrap the DEK, and then store the encrypted data and the wrapped DEK. The KEK never leaves Cloud KMS.
upvoted 22 times
MohitA
4 years, 1 month ago
Agree on A, spot on "KEK never leaves Cloud KMS"
upvoted 3 times
...
...
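The flow described above can be sketched end to end. This is an illustration of the envelope pattern only: the XOR "cipher" and the local KEK variable are insecure stand-ins — in practice the DEK would encrypt data with an authenticated cipher such as AES-GCM, and the wrap/unwrap steps would be Cloud KMS Encrypt/Decrypt calls so the KEK never leaves KMS:

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher; do NOT use XOR for actual encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# 1. Generate a DEK locally and encrypt the data with it.
dek = os.urandom(32)
plaintext = b"sensitive application data"
ciphertext = xor(plaintext, dek)

# 2. Wrap (encrypt) the DEK with the KEK — in reality a Cloud KMS Encrypt call;
#    the KEK itself would be held inside Cloud KMS, never in the application.
kek = os.urandom(32)
wrapped_dek = xor(dek, kek)

# 3. Store the encrypted data together with the wrapped DEK (never the raw DEK).
stored = {"ciphertext": ciphertext, "wrapped_dek": wrapped_dek}

# To decrypt: unwrap the DEK (a Cloud KMS Decrypt call), then decrypt locally.
recovered_dek = xor(stored["wrapped_dek"], kek)
recovered = xor(stored["ciphertext"], recovered_dek)
assert recovered == plaintext
```

Note that what gets persisted is exactly what answer A says: the encrypted data plus the encrypted (wrapped) DEK.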
Di4sa
Most Recent 7 months, 3 weeks ago
Selected Answer: A
A is the correct answer as stated in google docs The process of encrypting data is to generate a DEK locally, encrypt data with the DEK, use a KEK to wrap the DEK, and then store the encrypted data and the wrapped DEK. The KEK never leaves Cloud KMS. https://cloud.google.com/kms/docs/envelope-encryption#how_to_encrypt_data_using_envelope_encryption
upvoted 2 times
...
standm
1 year, 5 months ago
KMS is used for storing KEK in CSEK & CMEK
upvoted 1 times
...
aashissh
1 year, 5 months ago
Selected Answer: B
This follows the recommended practice of envelope encryption, where the DEK is encrypted with a KEK, which is managed by a KMS service such as Cloud KMS. Storing both the encrypted data and the KEK allows for the data to be decrypted using the KEK when needed. It's important to generate the DEK locally to ensure the security of the key, and to generate a new KEK in Cloud KMS for added security and key management capabilities.
upvoted 1 times
ppandher
11 months, 3 weeks ago
We need to store the encrypted data and Wrapped DEK . KEK would be centrally Managed by KMS . https://cloud.google.com/kms/docs/envelope-encryption#how_to_encrypt_data_using_envelope_encryption
upvoted 1 times
...
...
GCP72
2 years, 1 month ago
Selected Answer: A
The answer is A
upvoted 2 times
...
minostrozaml2
2 years, 9 months ago
Took the test today; only 5 questions from this dump, the rest are new questions.
upvoted 1 times
...
Bill831231
2 years, 10 months ago
A sounds like the correct answer: https://cloud.google.com/kms/docs/envelope-encryption#how_to_encrypt_data_using_envelope_encryption
upvoted 1 times
...
umashankar_a
3 years, 3 months ago
Answer A. Envelope encryption: https://cloud.google.com/kms/docs/envelope-encryption
Here are best practices for managing DEKs:
- Generate DEKs locally.
- When stored, always ensure DEKs are encrypted at rest.
- For easy access, store the DEK near the data that it encrypts.
The DEK is encrypted (also known as wrapped) by a key encryption key (KEK). The process of encrypting a key with another key is known as envelope encryption.
Here are best practices for managing KEKs:
- Store KEKs centrally (KMS).
- Set the granularity of the DEKs they encrypt based on their use case. For example, consider a workload that requires multiple DEKs to encrypt the workload's data chunks. You could use a single KEK to wrap all DEKs that are responsible for that workload's encryption.
- Rotate keys regularly, and also after a suspected incident.
upvoted 2 times
...
desertlotus1211
3 years, 5 months ago
I'm not sure what the answer is, but the answers to this question have changed... be prepared
upvoted 1 times
...
dtmtor
3 years, 6 months ago
Answer is A
upvoted 1 times
...
DebasishLowes
3 years, 6 months ago
Ans : A
upvoted 1 times
...
CloudTrip
3 years, 7 months ago
Correction: I changed it to A after reading the question once again.
upvoted 1 times
...
CloudTrip
3 years, 8 months ago
The answer is B, as after DEK encryption it's the KEK (not the encrypted DEK) that never leaves KMS
upvoted 1 times
...
Bharathy
3 years, 10 months ago
A - Envelope encryption (DEK to encrypt the data, KEK to encrypt the DEK; the KEK resides in KMS, and only the encrypted data and the wrapped DEK are stored back)
upvoted 2 times
...
[Removed]
3 years, 11 months ago
Ans - A https://cloud.google.com/kms/docs/envelope-encryption#how_to_encrypt_data_using_envelope_encryption
upvoted 1 times
...
CHECK666
4 years ago
The answer is A
upvoted 1 times
...
aiwaai
4 years, 1 month ago
The Answer is A
upvoted 2 times
...

Question 20

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 20 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 20
Topic #: 1
[All Professional Cloud Security Engineer Questions]

How should a customer reliably deliver Stackdriver logs from GCP to their on-premises SIEM system?

  • A. Send all logs to the SIEM system via an existing protocol such as syslog.
  • B. Configure every project to export all their logs to a common BigQuery DataSet, which will be queried by the SIEM system.
  • C. Configure Organizational Log Sinks to export logs to a Cloud Pub/Sub Topic, which will be sent to the SIEM via Dataflow.
  • D. Build a connector for the SIEM to query for all logs in real time from the GCP RESTful JSON APIs.
Show Suggested Answer Hide Answer
Suggested Answer: C 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
ESP_SAP
Highly Voted 3 years, 4 months ago
Correct answer is (C): Scenarios for exporting Cloud Logging data: Splunk This scenario shows how to export selected logs from Cloud Logging to Pub/Sub for ingestion into Splunk. Splunk is a security information and event management (SIEM) solution that supports several ways of ingesting data, such as receiving streaming data out of Google Cloud through Splunk HTTP Event Collector (HEC) or by fetching data from Google Cloud APIs through Splunk Add-on for Google Cloud. Using the Pub/Sub to Splunk Dataflow template, you can natively forward logs and events from a Pub/Sub topic into Splunk HEC. If Splunk HEC is not available in your Splunk deployment, you can use the Add-on to collect the logs and events from the Pub/Sub topic. https://cloud.google.com/solutions/exporting-stackdriver-logging-for-splunk
upvoted 18 times
AzureDP900
1 year, 5 months ago
I will go with C
upvoted 1 times
...
...
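For reference, the organization-level aggregated sink that answer C describes can be sketched with gcloud. This is a minimal, hedged sketch: the organization ID, project, topic, and sink names below are placeholders, and the log filter is illustrative.

```shell
# Create the destination topic in a dedicated project.
gcloud pubsub topics create siem-logs --project=siem-project

# Create an aggregated sink at the organization level; --include-children
# pulls in logs from every folder and project under the organization.
gcloud logging sinks create siem-sink \
  pubsub.googleapis.com/projects/siem-project/topics/siem-logs \
  --organization=123456789012 \
  --include-children \
  --log-filter='severity>=INFO'

# The create command prints the sink's writer identity; grant it publish rights.
gcloud pubsub topics add-iam-policy-binding siem-logs \
  --project=siem-project \
  --member=serviceAccount:SINK_WRITER_IDENTITY \
  --role=roles/pubsub.publisher

# Subscribe the SIEM (e.g. via the Pub/Sub to Splunk Dataflow template or a pull subscription).
gcloud pubsub subscriptions create siem-sub --topic=siem-logs --project=siem-project
```

From there, the Pub/Sub to Splunk Dataflow template (or the SIEM's own Pub/Sub connector) consumes the subscription.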
bkovari
Most Recent 8 months ago
C is the only way to go
upvoted 2 times
...
GCP72
1 year, 7 months ago
Selected Answer: C
I will go with C
upvoted 4 times
...
DebasishLowes
3 years ago
Ans : C
upvoted 2 times
...
BlahBaller
3 years, 2 months ago
As the Logging Service Manager when we set this up with GCP, I can verify that C is how we have it set up, based on Google's recommendations.
upvoted 2 times
...
Moss2011
3 years, 5 months ago
I think the correct one is D, because C mentions Dataflow, and it cannot be connected to any sink outside of GCP.
upvoted 1 times
...
[Removed]
3 years, 5 months ago
Ans - C https://cloud.google.com/solutions/exporting-stackdriver-logging-for-splunk
upvoted 2 times
...
deevisrk
3 years, 5 months ago
C looks correct: https://cloud.google.com/solutions/exporting-stackdriver-logging-for-splunk Splunk is the on-premises SIEM solution in the example above.
upvoted 2 times
...
saurabh1805
3 years, 5 months ago
I will go with Option B; read this thread for more reasons. C is not a workable solution, so it is the first one to rule out.
upvoted 1 times
...
CHECK666
3 years, 6 months ago
C is the answer.
upvoted 1 times
...
ArizonaClassics
3 years, 8 months ago
I will go with C
upvoted 3 times
...
xhova
4 years ago
C is correct
upvoted 4 times
...

Question 21


Exam Professional Cloud Security Engineer topic 1 question 21 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 21
Topic #: 1
[All Professional Cloud Security Engineer Questions]

In order to meet PCI DSS requirements, a customer wants to ensure that all outbound traffic is authorized.
Which two cloud offerings meet this requirement without additional compensating controls? (Choose two.)

  • A. App Engine
  • B. Cloud Functions
  • C. Compute Engine
  • D. Google Kubernetes Engine
  • E. Cloud Storage
Suggested Answer: CD 🗳️

Comments

KILLMAD
Highly Voted 5 years, 1 month ago
Answer is CD because the doc mentions the following: "App Engine ingress firewall rules are available, but egress rules are not currently available:" and "Compute Engine and GKE are the preferred alternatives."
upvoted 18 times
rafaelc
5 years ago
It is CD. App Engine ingress firewall rules are available, but egress rules are not currently available. Per requirements 1.2.1 and 1.3.4, you must ensure that all outbound traffic is authorized. SAQ A-EP and SAQ D–type merchants must provide compensating controls or use a different Google Cloud product. Compute Engine and GKE are the preferred alternatives. https://cloud.google.com/solutions/pci-dss-compliance-in-gcp
upvoted 7 times
...
...
BPzen
Most Recent 4 months, 3 weeks ago
Selected Answer: AB
PCI DSS (Payment Card Industry Data Security Standard) requires strict control over outbound traffic, meaning that only explicitly authorized traffic is allowed to leave the environment. Both App Engine and Cloud Functions are fully managed serverless platforms where Google handles the network configuration, including restrictions on outbound connections. Outbound traffic in these environments can be controlled without additional compensating controls because Google ensures compliance by managing the network restrictions and underlying infrastructure.
upvoted 3 times
...
luamail78
6 months ago
Selected Answer: AB
While the older answers (CD) were correct based on previous limitations, App Engine now supports egress controls. This means you can configure rules to manage outbound traffic, making it suitable for meeting PCI DSS requirements without needing extra compensating controls.
upvoted 1 times
...
Kiroo
1 year ago
Selected Answer: AB
Today this question does not have a specific answer. It seems that Compute Engine and GKE would need additional setup steps, while for Cloud Functions and App Engine it's possible to just set egress controls, so I would go with this pair.
upvoted 2 times
...
techdsmart
1 year, 2 months ago
AB. With App Engine, you can use ingress firewall rules and egress traffic controls. You can use Cloud Functions ingress and egress network settings. AB makes sense if we are talking about controlling ingress and egress traffic.
upvoted 3 times
...
rottzy
1 year, 6 months ago
have a look 👀 at https://cloud.google.com/security/compliance/pci-dss#:~:text=The%20scope%20of%20the%20PCI,products%20against%20the%20PCI%20DSS. there are multiple answers!
upvoted 1 times
...
GCBC
1 year, 7 months ago
Ans is CD as per google docs - https://cloud.google.com/architecture/pci-dss-compliance-in-gcp#securing_your_network
upvoted 1 times
...
standm
1 year, 11 months ago
CD - since both support Egress firewalls.
upvoted 1 times
...
mahi9
2 years, 1 month ago
Selected Answer: CD
The most viable options
upvoted 1 times
...
civilizador
2 years, 2 months ago
Answer is CD: https://cloud.google.com/architecture/pci-dss-compliance-in-gcp#securing_your_network Securing your network To secure inbound and outbound traffic to and from your payment-processing app network, you need to create the following: Compute Engine firewall rules A Compute Engine virtual private network (VPN) tunnel A Compute Engine HTTPS load balancer For creating your VPC, we recommend Cloud NAT for an additional layer of network security. There are many powerful options available to secure networks of both Compute Engine and GKE instances.
upvoted 1 times
...
GCParchitect2022
2 years, 3 months ago
Selected Answer: AD
Document updated. AD "App Engine ingress firewall rules and egress traffic controls" https://cloud.google.com/architecture/pci-dss-compliance-in-gcp#product_guidance
upvoted 4 times
...
Brosh
2 years, 3 months ago
Hey, can anyone explain why A isn't correct? The documentation mentions App Engine as an option but not Compute Engine: https://cloud.google.com/architecture/pci-dss-compliance-in-gcp
upvoted 2 times
deony
1 year, 10 months ago
IMO, this question was posted in 2020, and later Google released egress controls for serverless VPC. So currently App Engine is also PCI compliant. I think this question is outdated.
upvoted 4 times
deony
1 year, 10 months ago
https://cloud.google.com/blog/products/serverless/app-engine-egress-controls-and-user-managed-service-accounts?hl=en
upvoted 1 times
...
...
...
Littleivy
2 years, 5 months ago
Selected Answer: CD
Answer is CD For App Engine, the App Engine firewall only applies to incoming traffic routed to your app or service. https://cloud.google.com/appengine/docs/flexible/understanding-firewalls
upvoted 3 times
[Removed]
1 year, 8 months ago
This comment clearly explains why A is not correct. Therefore the correct answer is C,D
upvoted 1 times
...
...
AzureDP900
2 years, 5 months ago
CD is right
upvoted 1 times
...
GCP72
2 years, 7 months ago
Selected Answer: CD
The correct answer is CD
upvoted 1 times
...
jordi_194
3 years, 2 months ago
Selected Answer: CD
Ans: CD
upvoted 2 times
...
DebasishLowes
4 years ago
Ans : CD
upvoted 1 times
...

Question 23


Exam Professional Cloud Security Engineer topic 1 question 23 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 23
Topic #: 1
[All Professional Cloud Security Engineer Questions]

When working with agents in the support center via online chat, your organization's customers often share pictures of their documents with personally identifiable information (PII). Your leadership team is concerned that this PII is being stored as part of the regular chat logs, which are reviewed by internal or external analysts for customer service trends.
You want to resolve this concern while still maintaining data utility. What should you do?

  • A. Use Cloud Key Management Service to encrypt PII shared by customers before storing it for analysis.
  • B. Use Object Lifecycle Management to make sure that all chat records containing PII are discarded and not saved for analysis.
  • C. Use the image inspection and redaction actions of the DLP API to redact PII from the images before storing them for analysis.
  • D. Use the generalization and bucketing actions of the DLP API solution to redact PII from the texts before storing them for analysis.
Suggested Answer: C 🗳️
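As a hedged illustration of answer C, the image inspection-and-redaction flow boils down to one `projects.image.redact` request per picture. The sketch below only builds the request dict in the shape the `google-cloud-dlp` client expects; the project ID, info types, and image bytes are placeholders, and the actual API call is shown commented out since it requires credentials.

```python
# Build a DLP v2 image-redaction request (option C). Field names follow the
# request dict accepted by DlpServiceClient.redact_image(); "my-project" and
# the info types below are illustrative placeholders.

def build_redact_request(project_id: str, image_bytes: bytes) -> dict:
    """Return a request dict asking DLP to inspect an image for PII and
    draw redaction boxes over every finding."""
    return {
        "parent": f"projects/{project_id}",
        "inspect_config": {
            "info_types": [
                {"name": "EMAIL_ADDRESS"},
                {"name": "PHONE_NUMBER"},
                {"name": "CREDIT_CARD_NUMBER"},
            ],
        },
        # An empty config here means: redact every finding from
        # inspect_config with the default box color.
        "image_redaction_configs": [{}],
        "byte_item": {"type_": "IMAGE", "data": image_bytes},
    }

request = build_redact_request("my-project", b"<png bytes>")
# from google.cloud import dlp_v2
# response = dlp_v2.DlpServiceClient().redact_image(request=request)
# response.redacted_image would then hold the sanitized bytes to store for analysis.
print(request["parent"])  # projects/my-project
```

Storing `redacted_image` instead of the original chat attachment removes the PII while keeping the rest of the document usable for trend analysis.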

Comments

jitu028
Highly Voted 2 years ago
Answer is C
upvoted 5 times
...
dija123
Most Recent 6 months, 3 weeks ago
Selected Answer: C
Agree with C
upvoted 1 times
...
standm
1 year, 5 months ago
Since D talks about text and not images, it is not a suitable answer, I guess.
upvoted 2 times
...
shayke
1 year, 9 months ago
Selected Answer: C
C — the question refers to images.
upvoted 4 times
...
kamal17
1 year, 10 months ago
Answer is C
upvoted 2 times
...

Question 24


Exam Professional Cloud Security Engineer topic 1 question 24 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 24
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A company's application is deployed with a user-managed Service Account key. You want to use Google-recommended practices to rotate the key.
What should you do?

  • A. Open Cloud Shell and run gcloud iam service-accounts enable-auto-rotate --iam-account=IAM_ACCOUNT.
  • B. Open Cloud Shell and run gcloud iam service-accounts keys rotate --iam-account=IAM_ACCOUNT --key=NEW_KEY.
  • C. Create a new key, and use the new key in the application. Delete the old key from the Service Account.
  • D. Create a new key, and use the new key in the application. Store the old key on the system as a backup key.
Suggested Answer: C 🗳️

Comments

mdc
Highly Voted 3 years, 4 months ago
C is correct. As explained, You can rotate a key by creating a new key, updating applications to use the new key, and deleting the old key. Use the serviceAccount.keys.create() method and serviceAccount.keys.delete() method together to automate the rotation. https://cloud.google.com/iam/docs/creating-managing-service-account-keys#deleting_service_account_keys
upvoted 11 times
...
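The create/update/delete rotation flow mdc describes can be sketched with gcloud. All names below (service account, key file, key ID) are placeholders:

```shell
# 1. Create a new key; the private key is written to new-key.json.
gcloud iam service-accounts keys create new-key.json \
  --iam-account=app-sa@my-project.iam.gserviceaccount.com

# 2. Deploy new-key.json to the application and verify it authenticates.

# 3. List the keys to find the old key's ID...
gcloud iam service-accounts keys list \
  --iam-account=app-sa@my-project.iam.gserviceaccount.com

# ...then delete the old key so it can no longer be used.
gcloud iam service-accounts keys delete OLD_KEY_ID \
  --iam-account=app-sa@my-project.iam.gserviceaccount.com
```

Deleting (rather than keeping) the old key is the point of the rotation: a retained "backup" key remains a valid credential for anyone who has it.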
aliounegdiop
Most Recent 1 year, 1 month ago
B is correct. For C, creating a new key and deleting the old one from the Service Account is not recommended: deleting the old key without replacing it could prevent your application from authenticating and accessing resources.
upvoted 1 times
aliounegdiop
1 year, 1 month ago
My bad, it should be D: having a backup key in case of problems with the new key.
upvoted 1 times
eeghai7thioyaiR4
5 months, 2 weeks ago
If you keep the old key active, then your rotation is worthless (because anyone could still use the old key). C is the solution: rotate and destroy the previous key.
upvoted 3 times
...
...
...
[Removed]
1 year, 2 months ago
Selected Answer: C
"C" appears to be the most accurate. https://cloud.google.com/iam/docs/key-rotation#process
upvoted 3 times
...
[Removed]
1 year, 2 months ago
"C" appears to be the most accurate. https://cloud.google.com/iam/docs/key-rotation
upvoted 2 times
[Removed]
1 year, 2 months ago
Specifically: https://cloud.google.com/iam/docs/key-rotation#process
upvoted 1 times
...
...
megalucio
1 year, 3 months ago
Selected Answer: C
C it is the ans
upvoted 1 times
...
amanshin
1 year, 3 months ago
The correct answer is C. Create a new key, and use the new key in the application. Delete the old key from the Service Account. Google recommends that you rotate user-managed service account keys every 90 days or less. This helps to reduce the risk of unauthorized access to your resources if the key is compromised.
upvoted 1 times
...
gcpengineer
1 year, 4 months ago
Selected Answer: C
C is the ans
upvoted 1 times
gcpengineer
1 year, 4 months ago
https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys#rotate-keys
upvoted 1 times
...
...
aashissh
1 year, 5 months ago
Selected Answer: D
The recommended practice to rotate a user-managed Service Account key in GCP is to create a new key and use it in the application while keeping the old key for a specified period as a backup key. This helps to ensure that the application's service account always has a valid key and that there is no service disruption during the key rotation process. Therefore, the correct answer is option D.
upvoted 3 times
...
GCP72
2 years, 1 month ago
Selected Answer: C
The correct answer is C
upvoted 2 times
...
absipat
2 years, 4 months ago
c of course
upvoted 1 times
...
DebasishLowes
3 years, 6 months ago
Ans : C
upvoted 2 times
...
[Removed]
3 years, 11 months ago
Ans - C https://cloud.google.com/iam/docs/understanding-service-accounts#managing_service_account_keys
upvoted 4 times
...
ArizonaClassics
4 years, 1 month ago
C is the right choice for me
upvoted 4 times
...
aiwaai
4 years, 1 month ago
Correct Answer: C
upvoted 2 times
...

Question 25


Exam Professional Cloud Security Engineer topic 1 question 25 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 25
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your team needs to configure their Google Cloud Platform (GCP) environment so they can centralize the control over networking resources like firewall rules, subnets, and routes. They also have an on-premises environment where resources need access back to the GCP resources through a private VPN connection.
The networking resources will need to be controlled by the network security team.
Which type of networking design should your team use to meet these requirements?

  • A. Shared VPC Network with a host project and service projects
  • B. Grant Compute Admin role to the networking team for each engineering project
  • C. VPC peering between all engineering projects using a hub and spoke model
  • D. Cloud VPN Gateway between all engineering projects using a hub and spoke model
Suggested Answer: A 🗳️

Reference:
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#centralize_network_control

Comments

ArizonaClassics
Highly Voted 2 years, 8 months ago
I agree with A Centralize network control: Use Shared VPC to connect to a common VPC network. Resources in those projects can communicate with each other securely and efficiently across project boundaries using internal IPs. You can manage shared network resources, such as subnets, routes, and firewalls, from a central host project, enabling you to apply and enforce consistent network policies across the projects.
upvoted 19 times
ArizonaClassics
2 years, 8 months ago
WATCH: https://www.youtube.com/watch?v=WotV3D01tJA READ: https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#centralize_network_control
upvoted 5 times
...
...
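The Shared VPC setup described above can be sketched in two gcloud commands. Project IDs here are placeholders; the network security team then defines firewall rules, subnets, and the Cloud VPN gateway inside the host project:

```shell
# Designate the central networking project as a Shared VPC host project.
gcloud compute shared-vpc enable host-project

# Attach each engineering project as a service project of that host.
gcloud compute shared-vpc associated-projects add service-project-1 \
  --host-project=host-project
```

Workloads in the service projects then attach to subnets owned by the host project, so one team controls all networking resources centrally.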
Sheeda
Highly Voted 2 years, 8 months ago
I believe the answer is D. How can Shared VPC give access to your on-premises environment? A seems wrong to me.
upvoted 5 times
AkbarM
6 months, 3 weeks ago
I also believe the same. I worked on interconnects and gateways to connect on-prem resources. Only hub and spoke helps to connect the on-premises network. Of course, we can centralize network controls using Shared VPC, but the need here is that some engineering resources on-prem need to access GCP resources, so this needs a gateway to access GCP resources.
upvoted 2 times
...
...
kamal17
Most Recent 4 months ago
Answer is D, because on-premises users need to access the GCP resources with the help of Cloud VPN.
upvoted 2 times
...
GCP72
7 months, 2 weeks ago
Selected Answer: A
The correct answer is A
upvoted 1 times
...
minostrozaml2
1 year, 2 months ago
Took the test today; only 5 questions from this dump, the rest are new questions.
upvoted 1 times
...
ZODOGAM
1 year, 4 months ago
Sheeda, in my case I can confirm that the VPNs are established from the Shared VPC, and that is where traffic from the on-premises sites enters. The answer is definitely A.
upvoted 1 times
...
DebasishLowes
2 years ago
Ans : A. It will be shared VPC as it is asking for centralized network control.
upvoted 1 times
...
jonclem
2 years, 5 months ago
Option D is incorrect and a violation of Google's Service Specific Terms, as per: https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview I'd go with option A myself.
upvoted 1 times
...
[Removed]
2 years, 5 months ago
Ans - A
upvoted 1 times
...
saurabh1805
2 years, 5 months ago
A, this is exact reason to use shared VPC
upvoted 1 times
...
CHECK666
2 years, 6 months ago
A is the answer.
upvoted 1 times
...
Akku1614
2 years, 7 months ago
A is correct as Shared VPC provides us with Centralized control however VPC Peering is a decentralized option.
upvoted 1 times
...
aiwaai
2 years, 7 months ago
Correct Answer: A
upvoted 1 times
...
Sheeda
2 years, 8 months ago
Connect your enterprise network: many enterprises need to connect existing on-premises infrastructure with their Google Cloud resources. Evaluate your bandwidth, latency, and SLA requirements to choose the best connection option. If you need low-latency, highly available, enterprise-grade connections that let you reliably transfer data between your on-premises and VPC networks without traversing the internet, use Cloud Interconnect: Dedicated Interconnect provides a direct physical connection between your on-premises network and Google's network, while Partner Interconnect provides connectivity between your on-premises and Google Cloud VPC networks through a supported service provider. If you don't require the low latency and high availability of Cloud Interconnect, or you are just starting your cloud journey, use Cloud VPN to set up encrypted IPsec VPN tunnels between your on-premises network and VPC. Compared to a direct, private connection, an IPsec VPN tunnel has lower overhead and costs.
upvoted 1 times
ESP_SAP
2 years, 4 months ago
you Should go back to the GCP Cloud Architect concepts or GCP Networking!
upvoted 2 times
...
ArizonaClassics
2 years, 7 months ago
Sheeda, you need to read and understand the question.
upvoted 1 times
ArizonaClassics
2 years, 7 months ago
They are asking how you can centralize the control over networking resources like firewall rules, subnets, and routes. watch this: https://www.youtube.com/watch?v=WotV3D01tJA you will see that you can also manage vpn connections as well
upvoted 1 times
...
...
...

Question 26


Exam Professional Cloud Security Engineer topic 1 question 26 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 26
Topic #: 1
[All Professional Cloud Security Engineer Questions]

An organization is migrating from their current on-premises productivity software systems to G Suite. Some network security controls were in place that were mandated by a regulatory body in their region for their previous on-premises system. The organization's risk team wants to ensure that network security controls are maintained and effective in G Suite. A security architect supporting this migration has been asked to ensure that network security controls are in place as part of the new shared responsibility model between the organization and Google Cloud.
What solution would help meet the requirements?

  • A. Ensure that firewall rules are in place to meet the required controls.
  • B. Set up Cloud Armor to ensure that network security controls can be managed for G Suite.
  • C. Network security is a built-in solution and Google's Cloud responsibility for SaaS products like G Suite.
  • D. Set up an array of Virtual Private Cloud (VPC) networks to control network security as mandated by the relevant regulation.
Suggested Answer: C 🗳️

Comments

ESP_SAP
Highly Voted 3 years, 10 months ago
Correct answer is (C): G Suite is a SaaS application. Under shared responsibility, "security of the cloud" means GCP is responsible for protecting the infrastructure that runs all of the services offered in the GCP Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run GCP Cloud services.
upvoted 11 times
AzureDP900
1 year, 11 months ago
C is right
upvoted 2 times
...
...
Topsy
Highly Voted 3 years, 9 months ago
Answer is C- Review this Youtube Video- https://www.youtube.com/watch?v=D2zf0SgNdUw, scroll to 7:55, it would show you the Shared Responsibility model- With Gsuite being a SaaS product, Network Security is handled by Google
upvoted 7 times
...
okhascorpio
Most Recent 7 months, 3 weeks ago
This thread suggests option "D" to be the only viable option. Now what ?? https://www.exam-answer.com/migrating-to-gsuite-network-security-controls
upvoted 1 times
...
[Removed]
1 year, 2 months ago
Selected Answer: C
G Suite, AKA Workspace, is software as a service, where the SaaS provider (Google) is responsible for all underlying security. https://youtu.be/D2zf0SgNdUw?t=535
upvoted 2 times
...
ppandey96
1 year, 6 months ago
Selected Answer: C
https://www.checkpoint.com/cyber-hub/cloud-security/what-is-google-cloud-platform-gcp-security/top-7-google-cloud-platform-gcp-security-best-practices/
upvoted 1 times
...
alleinallein
1 year, 6 months ago
Selected Answer: B
Shared responsibility model. Network security is not only Google's responsibility. As easy as that.
upvoted 1 times
alleinallein
1 year, 6 months ago
Need to change: as above, if Google Workspace is considered SaaS, then network security is the responsibility of the provider. C is correct.
upvoted 2 times
...
Appsec977
1 year, 4 months ago
How would you set up Cloud Armor in Google Workspace? Totally misleading answer.
upvoted 3 times
...
...
shayke
1 year, 9 months ago
Selected Answer: C
C — for SaaS, network security is the responsibility of the cloud provider.
upvoted 1 times
...
absipat
2 years, 4 months ago
c of course
upvoted 1 times
...
absipat
2 years, 4 months ago
C, as it is SaaS.
upvoted 1 times
...
FatCharlie
3 years, 10 months ago
Except for C, none of the options are possible in G Suite. There are no firewall, VPC, or Cloud Armor options there as far as I know.
upvoted 4 times
...
[Removed]
3 years, 11 months ago
Ans - A
upvoted 2 times
...
saurabh1805
3 years, 11 months ago
The question is asking about network security controls, hence I will go with Option A.
upvoted 1 times
...
skshak
4 years ago
Answer is C. Gsuite is SaaS
upvoted 2 times
...

Question 27


Exam Professional Cloud Security Engineer topic 1 question 27 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 27
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A customer's company has multiple business units. Each business unit operates independently, and each has their own engineering group. Your team wants visibility into all projects created within the company and wants to organize their Google Cloud Platform (GCP) projects based on different business units. Each business unit also requires separate sets of IAM permissions.
Which strategy should you use to meet these needs?

  • A. Create an organization node, and assign folders for each business unit.
  • B. Establish standalone projects for each business unit, using gmail.com accounts.
  • C. Assign GCP resources in a project, with a label identifying which business unit owns the resource.
  • D. Assign GCP resources in a VPC for each business unit to separate network access.
Suggested Answer: A 🗳️

Comments

ArizonaClassics
Highly Voted 3 years, 8 months ago
I will go with A Refer to: https://cloud.google.com/resource-manager/docs/listing-all-resources Also: https://wideops.com/mapping-your-organization-with-the-google-cloud-platform-resource-hierarchy/
upvoted 18 times
...
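The folder-per-business-unit hierarchy in answer A can be sketched with gcloud (organization ID, folder ID, project, and group are placeholders; `gcloud projects move` was a beta command in older SDKs):

```shell
# Create one folder per business unit under the organization node.
gcloud resource-manager folders create \
  --display-name="Business Unit A" \
  --organization=123456789012

# Move an existing project under that folder (FOLDER_ID comes from the command above).
gcloud projects move my-project --folder=FOLDER_ID

# Give the unit's engineering group its own IAM permissions, scoped to the folder.
gcloud resource-manager folders add-iam-policy-binding FOLDER_ID \
  --member=group:bu-a-eng@example.com \
  --role=roles/editor
```

Visibility into all projects comes from the organization node at the top, while each business unit's permissions stay separate at its folder.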
[Removed]
Most Recent 8 months, 3 weeks ago
Selected Answer: A
"A" Here's a blog post articulating this very business case. https://cloud.google.com/blog/products/gcp/mapping-your-organization-with-the-google-cloud-platform-resource-hierarchy
upvoted 1 times
...
shayke
1 year, 3 months ago
Selected Answer: A
A is the right ans - resource manager
upvoted 1 times
...
DebasishLowes
3 years, 1 month ago
Ans - A
upvoted 3 times
...
[Removed]
3 years, 5 months ago
Ans - A
upvoted 1 times
...
aiwaai
3 years, 7 months ago
Correct Answer: A
upvoted 1 times
...

Question 28


Exam Professional Cloud Security Engineer topic 1 question 28 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 28
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A company has redundant mail servers in different Google Cloud Platform regions and wants to route customers to the nearest mail server based on location.
How should the company accomplish this?

  • A. Configure TCP Proxy Load Balancing as a global load balancing service listening on port 995.
  • B. Create a Network Load Balancer to listen on TCP port 995 with a forwarding rule to forward traffic based on location.
  • C. Use Cross-Region Load Balancing with an HTTP(S) load balancer to route traffic to the nearest region.
  • D. Use Cloud CDN to route the mail traffic to the closest origin mail server based on client IP address.
Suggested Answer: A 🗳️

Comments

ESP_SAP
Highly Voted 4 years, 4 months ago
Correct answer is (A): TCP Proxy Load Balancing is implemented on GFEs that are distributed globally. If you choose the Premium Tier of Network Service Tiers, a TCP proxy load balancer is global. In Premium Tier, you can deploy backends in multiple regions, and the load balancer automatically directs user traffic to the closest region that has capacity. If you choose the Standard Tier, a TCP proxy load balancer can only direct traffic among backends in a single region. https://cloud.google.com/load-balancing/docs/load-balancing-overview#tcp-proxy-load-balancing
upvoted 26 times
...
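A global TCP proxy setup on port 995, as answer A describes, can be sketched roughly as follows. All resource names are placeholders, and the per-region instance groups backing the mail servers are assumed to already exist:

```shell
# TCP health check on the mail port.
gcloud compute health-checks create tcp mail-hc --port=995

# Global backend service; per-region instance groups would be added as backends.
gcloud compute backend-services create mail-backend \
  --global --protocol=TCP --health-checks=mail-hc

# TCP proxy in front of the backend service.
gcloud compute target-tcp-proxies create mail-proxy \
  --backend-service=mail-backend

# Single global anycast frontend: clients are routed to the closest healthy region.
gcloud compute forwarding-rules create mail-fr \
  --global --target-tcp-proxy=mail-proxy --ports=995
```

The single global forwarding rule is what gives the nearest-region routing; a Network Load Balancer, by contrast, is regional.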
Warren2020
Highly Voted 4 years, 8 months ago
A is the correct answer. D is not correct. CDN works with HTTP(s) traffic and requires caching, which is not a valid feature used for mail server
upvoted 9 times
...
lolanczos
Most Recent 1 month, 1 week ago
Selected Answer: A
It's A. TCP is the only one that is global (multiple regions). A Network Load Balancer is regional. The HTTP(S) LB is only for http/https traffic and would not be suitable. Cloud CDN doesn't even make sense as an option.
upvoted 1 times
...
SQLbox
7 months ago
TCP Proxy Load Balancing is a global load balancing service that works at Layer 4 (TCP/SSL) and is ideal for services like mail servers that use non-HTTP protocols, such as IMAP (port 993) or POP3 (port 995). TCP Proxy Load Balancing supports global load balancing, meaning it can route traffic to the nearest backend based on the geographic location of the user. This ensures that customers are routed to the nearest mail server, optimizing performance and latency.
upvoted 1 times
...
Mr_MIXER007
7 months, 2 weeks ago
Selected Answer: A
Correct answer is (A)
upvoted 1 times
...
usercism007
8 months ago
Select Answer: A
upvoted 1 times
...
3d9563b
8 months, 3 weeks ago
Selected Answer: A
TCP Proxy Load Balancing is the appropriate choice for globally routing TCP traffic, such as mail services, to the nearest server based on client location. It provides the necessary global load balancing capabilities to achieve this requirement.
upvoted 1 times
...
pico
10 months, 3 weeks ago
Selected Answer: B
why the other options are not the best fit: A. TCP Proxy Load Balancing: This is a global load balancing solution, but it might not be the most efficient for routing mail traffic based on proximity. C. Cross-Region Load Balancing with HTTP(S): This is designed for HTTP/HTTPS traffic, not mail protocols like POP3, SMTP, or IMAP. D. Cloud CDN: While Cloud CDN can cache content for faster delivery, it's not designed to handle real-time mail traffic routing.
upvoted 1 times
...
shanwford
11 months, 2 weeks ago
Selected Answer: A
I go for (A) because Network Load Balancers are Layer 4, regional, passthrough load balancers, so they don't work as a global LB ("different GCP regions").
upvoted 1 times
...
eeghai7thioyaiR4
11 months, 3 weeks ago
This is probably an old question 2-3 years ago, GCP introduces a "proxy network load balancer" So, in 2024, we have: - application load balancer, global, external-only, multi-region backends, only for HTTP and HTTPS, do not preserve clients' IP - "legacy" network load balancer (aka "passthrough"), external or internal, single-region, tcp or udp, preserve clients' IP - "new" network load balancer (aka "proxy"), global, external or internal, multi-region backends, tcp or udp, do not preserve clients' IP Here, we want: - global - external - multi-region - non-http => proxy network load balancer is the solution This maps to A (generic answer) or B (but only in proxy mode: passthrough won't work)
upvoted 2 times
eeghai7thioyaiR4
11 months, 1 week ago
On the other hand, B says "with forwarding rule", so this implies passthrough mode. That leaves only A as a solution.
upvoted 1 times
...
...
Roro_Brother
11 months, 3 weeks ago
Selected Answer: B
The company can achieve location-based routing of customers to the nearest mail server in Google Cloud Platform (GCP) using a Network Load Balancer (NLB)
upvoted 1 times
JOKERO
6 months, 3 weeks ago
NLB is not global
upvoted 1 times
...
...
dija123
1 year, 1 month ago
Selected Answer: B
The company can achieve location-based routing of customers to the nearest mail server in Google Cloud Platform (GCP) using a Network Load Balancer (NLB)
upvoted 2 times
...
okhascorpio
1 year, 1 month ago
There is no direct SMTP support in the TCP proxy load balancer, hence it cannot be A. Google Cloud best practices recommend Network Load Balancing (NLB) for Layer 4 protocols like SMTP.
upvoted 3 times
...
ErenYeager
1 year, 2 months ago
Selected Answer: B
B) Create a Network Load Balancer to listen on TCP port 995 with a forwarding rule to forward traffic based on location. Explanation: Port 995 implies this is SSL/TLS encrypted mail traffic (IMAP). Network Load Balancing allows creating forwarding rules to route traffic based on IP location. This can send users to the closest backend mail server. TCP Proxy LB does not allow location-based routing. HTTP(S) LB is for HTTP only, not generic TCP traffic. Cloud CDN works at the HTTP level so cannot route TCP mail traffic. So a Network Load Balancer with IP based forwarding rules provides the capability to direct mail users to the closest regional mail server based on their location, meeting the requirement.
upvoted 3 times
...
[Removed]
1 year, 8 months ago
Selected Answer: A
"A" is the most suitable answer. Mail servers use SMTP which run on TCP. This excludes C, D which are HTTPs based. Option B is not global which excludes it as well. The following page elaborates on global external proxy load balancing under the premium tier which meets the needs for this question and aligns with option A https://cloud.google.com/load-balancing/docs/tcp#identify_the_mode
upvoted 5 times
...
gcpengineer
1 year, 10 months ago
Selected Answer: A
https://cloud.google.com/load-balancing/docs/tcp
upvoted 2 times
...
gcpengineer
1 year, 10 months ago
Selected Answer: B
B is the ans
upvoted 2 times
gcpengineer
1 year, 10 months ago
A is the ans. https://cloud.google.com/load-balancing/docs/tcp
upvoted 2 times
...
...

Question 29

Exam Professional Cloud Security Engineer topic 1 question 29 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 29
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your team sets up a Shared VPC Network where project co-vpc-prod is the host project. Your team has configured the firewall rules, subnets, and VPN gateway on the host project. They need to enable Engineering Group A to attach a Compute Engine instance to only the 10.1.1.0/24 subnet.
What should your team grant to Engineering Group A to meet this requirement?

  • A. Compute Network User Role at the host project level.
  • B. Compute Network User Role at the subnet level.
  • C. Compute Shared VPC Admin Role at the host project level.
  • D. Compute Shared VPC Admin Role at the service project level.
Suggested Answer: B 🗳️

Comments

mozammil89
Highly Voted 5 years ago
The correct answer is B. https://cloud.google.com/vpc/docs/shared-vpc#svc_proj_admins
upvoted 22 times
...
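The subnet-level grant discussed in these comments can be made with gcloud; the group address, subnet name, and region below are illustrative:

```shell
# In the host project, grant Compute Network User on one subnet only,
# so the group can attach instances to 10.1.1.0/24 and nothing else
gcloud compute networks subnets add-iam-policy-binding dev-subnet \
    --project=co-vpc-prod \
    --region=us-central1 \
    --member="group:eng-group-a@example.com" \
    --role="roles/compute.networkUser"
```

Granting the same role at the project level would instead expose every subnet in the host project, which is why B is narrower than A.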
okhascorpio
Most Recent 1 year, 1 month ago
Selected Answer: A
A is right. Source: https://cloud.google.com/compute/docs/access/iam#compute.networkUser
upvoted 1 times
stefanop
9 months ago
this permission can be granted only at project level, not subnet level
upvoted 1 times
...
...
ErenYeager
1 year, 2 months ago
Selected Answer: B
B) Compute Network User Role at the subnet level. The key points:
- In a Shared VPC, the subnets are configured in the host project.
- To allow another project to use a specific subnet, grant the Compute Network User role on that subnet.
- The Compute Shared VPC Admin role allows full administration, which is more privileged than needed.
- The Compute Network User role at the project level allows accessing all subnets, not just 10.1.1.0/24.

So granting the Compute Network User role specifically on the 10.1.1.0/24 subnet gives targeted access to only that subnet, meeting the requirement. The subnet-level Compute Network User role provides the minimum necessary access to fulfill the need for Engineering Group A.
upvoted 4 times
...
Xoxoo
1 year, 6 months ago
Selected Answer: B
To enable Engineering Group A to attach a Compute Engine instance to only the 10.1.1.0/24 subnet in a Shared VPC setup, you should follow these steps: Grant the Compute Network User role at the service project level: This will allow members of Engineering Group A to create Compute Engine instances in their respective service projects. Grant the Compute Network User role specifically on the 10.1.1.0/24 subnet: To ensure that Engineering Group A can only attach instances to the desired subnet, you should grant the Compute Network User role directly at the subnet level. This way, they have the necessary permissions for that specific subnet without impacting other subnets in the Shared VPC. Option B, "Compute Network User Role at the subnet level," is the most appropriate choice in this scenario to achieve the desired outcome.
upvoted 3 times
...
shetniel
1 year, 6 months ago
The correct answer is B, per the principle of least-privilege access
upvoted 2 times
...
[Removed]
1 year, 8 months ago
Selected Answer: B
"B" seems to be the most appropriate answer. See step 4 here: https://medium.com/google-cloud/google-cloud-shared-vpc-b33e0c9dd320
upvoted 2 times
...
aashissh
1 year, 12 months ago
Selected Answer: B
To enable Engineering Group A to attach a Compute Engine instance to only the 10.1.1.0/24 subnet in a Shared VPC Network where project co-vpc-prod is the host project, your team should grant Compute Network User Role at the subnet level. This will allow Engineering Group A to create and manage resources in the specified subnet while restricting them from making changes to other resources in the host project. Granting Compute Network User Role at the host project level would allow Engineering Group A to create and manage resources across all subnets in the host project, which is more than what is needed in this case. Compute Shared VPC Admin Role at either the host or service project level would give Engineering Group A too much control over the Shared VPC Network.
upvoted 2 times
...
mahi9
2 years, 1 month ago
Selected Answer: B
Admin role is not required
upvoted 2 times
...
Olen93
2 years, 1 month ago
The correct answer is B. https://cloud.google.com/compute/docs/access/iam#compute.networkUser states that the lowest level it can be granted on is the project; however, I did confirm on my own company's Shared VPC that roles/compute.networkUser can be granted at the subnet level.
upvoted 1 times
...
amanp
2 years, 1 month ago
Selected Answer: A
The answer is A, not B. The lowest level at which the Compute Network User role can be assigned is the project level, NOT the subnet level. https://cloud.google.com/compute/docs/access/iam#compute.networkUser
upvoted 2 times
...
Meyucho
2 years, 4 months ago
Selected Answer: B
Grant network.user at subnet level: https://cloud.google.com/vpc/docs/provisioning-shared-vpc#networkuseratsubnet
upvoted 2 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: B
The correct answer is B. https://cloud.google.com/vpc/docs/shared-vpc#svc_proj_admins
upvoted 2 times
...
rajananna
2 years, 6 months ago
Selected Answer: A
Lowest level grant is at Project level. https://cloud.google.com/compute/docs/access/iam#compute.networkUser
upvoted 2 times
Premumar
2 years, 5 months ago
Lowest level grant is at Subnet level in this option. Project level is a broad level access.
upvoted 2 times
...
...
tangac
2 years, 7 months ago
Selected Answer: A
based on that documentation it should clearly be done at the host project level : https://cloud.google.com/compute/docs/access/iam#compute.networkUser
upvoted 3 times
...
piyush_1982
2 years, 8 months ago
Selected Answer: B
https://cloud.google.com/vpc/docs/shared-vpc#svc_proj_admins
upvoted 1 times
...
Medofree
3 years ago
Selected Answer: B
The correct answer is b
upvoted 2 times
...
droppler
3 years, 9 months ago
The right one is B to my thinking, but if I need to enable the other team to do their jobs, it falls into D
upvoted 2 times
...

Question 30

Exam Professional Cloud Security Engineer topic 1 question 30 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 30
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A company migrated their entire data center to Google Cloud Platform. It is running thousands of instances across multiple projects managed by different departments. You want to have a historical record of what was running in Google Cloud Platform at any point in time.
What should you do?

  • A. Use Resource Manager on the organization level.
  • B. Use Forseti Security to automate inventory snapshots.
  • C. Use Stackdriver to create a dashboard across all projects.
  • D. Use Security Command Center to view all assets across the organization.
Suggested Answer: D 🗳️

Comments

smart123
Highly Voted 4 years, 10 months ago
B is the correct answer. Only Forseti Security can have both 'past' and 'present' (i.e. historical) records of the resources. https://forsetisecurity.org/about/
upvoted 13 times
gcpengineer
1 year, 10 months ago
Forseti is outdated; no one uses it anymore
upvoted 5 times
...
...
mynk29
Highly Voted 3 years, 1 month ago
Outdated question; you should use Cloud Asset Inventory now.
upvoted 11 times
...
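As several commenters note, the modern replacement for Forseti's inventory snapshots is Cloud Asset Inventory. A rough sketch of both the snapshot and the history use case; ORG_ID, the bucket, and the asset name are placeholders:

```shell
# Point-in-time snapshot of every resource in the organization, written to GCS
gcloud asset export --organization=ORG_ID \
    --content-type=resource \
    --output-path="gs://my-asset-snapshots/inventory.json"

# Change history of a specific asset over a time window
gcloud asset get-history --organization=ORG_ID \
    --asset-names="//compute.googleapis.com/projects/my-proj/zones/us-central1-a/instances/my-vm" \
    --content-type=resource \
    --start-time=2024-01-01T00:00:00Z
```

Scheduling the export on a recurring cadence reproduces the "history of what was running" that the question asks for.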
lolanczos
Most Recent 1 month, 1 week ago
Selected Answer: B
B. Only Forseti keeps a complete record over time. SCC gives you how it looks now, but you cannot look into the past, which the scenario in the question requires.
upvoted 1 times
...
dlenehan
3 months, 3 weeks ago
Selected Answer: D
Old question. Forseti? SCC is the newest kid on the block and fits best here.
upvoted 1 times
...
BPzen
4 months, 1 week ago
Selected Answer: B
To maintain a historical record of what resources were running in Google Cloud Platform (GCP) at any point in time, you need a solution that periodically takes inventory snapshots of all assets. Forseti Security is specifically designed to automate this process, making it the best option for this use case.
upvoted 1 times
...
brpjp
6 months, 3 weeks ago
D - SCC is supported by Gemini and not Forseti.
upvoted 1 times
...
Roro_Brother
11 months, 3 weeks ago
Selected Answer: D
D is good answer in this case. Foreseti is outdated
upvoted 2 times
...
Kiroo
1 year ago
Selected Answer: D
It seems that Forseti is outdated and its features have been incorporated into Security Command Center
upvoted 3 times
...
madcloud32
1 year, 1 month ago
Selected Answer: D
D is good answer in this case. Foreseti is outdated
upvoted 2 times
...
b6f53d8
1 year, 3 months ago
D is a good answer
upvoted 2 times
...
ced3eals
1 year, 5 months ago
Selected Answer: D
For an actual recent answer, D is the correct one.
upvoted 1 times
...
rottzy
1 year, 6 months ago
Weird, Forseti was deprecated back in 2018-2019; why was it even considered as an answer! 😉😁 https://forsetisecurity.org/news/2019/02/18/deprecate-1.0.html I'm going with option D
upvoted 1 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: A
B is old way of doing things and things got updated
upvoted 2 times
...
[Removed]
1 year, 8 months ago
Selected Answer: B
"B" is the correct answer. Forseti has been deprecated; however, its capabilities and features (like asset inventory) have been incorporated into Security Command Center. https://cloud.google.com/security-command-center/docs/concepts-security-command-center-overview#inventory
upvoted 2 times
...
amanshin
1 year, 9 months ago
Correct is A. The problem with Forseti is that it's a third-party tool, and it's now sunset and archived due to lack of involvement. Do you really think Google would care to place it in the test? Using Resource Manager at the organization level is a good way to have a historical record of what was running in Google Cloud Platform at any point in time. This is because Resource Manager provides a centralized view of all of your organization's resources, including projects, folders, and organization policies. It's a native tool, so I would go for answer A.
upvoted 1 times
...
FunkyB
2 years, 2 months ago
B is the correct answer. "Keep track of your environment Take inventory snapshots of your Google Cloud Platform (GCP) resources on a recurring cadence so that you always have a history of what was in your cloud." https://forsetisecurity.org/
upvoted 1 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: B
B is the correct answer. Only Forseti security can have both 'past' and 'present' (i.e. historical) records of the resources. https://forsetisecurity.org/about/
upvoted 2 times
...

Question 31

Exam Professional Cloud Security Engineer topic 1 question 31 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 31
Topic #: 1
[All Professional Cloud Security Engineer Questions]

An organization is starting to move its infrastructure from its on-premises environment to Google Cloud Platform (GCP). The first step the organization wants to take is to migrate its current data backup and disaster recovery solutions to GCP for later analysis. The organization's production environment will remain on- premises for an indefinite time. The organization wants a scalable and cost-efficient solution.
Which GCP solution should the organization use?

  • A. BigQuery using a data pipeline job with continuous updates
  • B. Cloud Storage using a scheduled task and gsutil
  • C. Compute Engine Virtual Machines using Persistent Disk
  • D. Cloud Datastore using regularly scheduled batch upload jobs
Suggested Answer: B 🗳️

Comments

xhova
Highly Voted 4 years, 6 months ago
Ans is B. A cost efficient disaster recovery solution is needed not a data warehouse.
upvoted 24 times
...
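The "scheduled task and gsutil" pattern in answer B amounts to a cron job mirroring the on-prem backup directory into a bucket. A sketch with illustrative paths and bucket name:

```shell
# crontab entry on the on-prem backup host: run the mirror nightly at 02:00
# 0 2 * * * /usr/local/bin/backup-to-gcs.sh

# backup-to-gcs.sh: parallel (-m), recursive (-r) mirror of the backup
# directory; -d deletes remote objects that no longer exist locally
gsutil -m rsync -r -d /var/backups gs://example-dr-backups/nightly

# Cost efficiency comes from the storage class, e.g. creating the bucket as:
# gsutil mb -c archive -l us-central1 gs://example-dr-backups
```

A cold storage class plus scheduled rsync keeps the DR copy current and cheap while production stays on-premises.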
madcloud32
Most Recent 7 months ago
Selected Answer: B
B is correct. It is about data backup and DR, not a database backup to GCP. BQ is not cost-efficient compared to GCS.
upvoted 1 times
...
tunstila
9 months, 2 weeks ago
The two keywords here are 'later' and 'cost-efficient'. The company doesn't even know when the analysis will occur, but they want to store the data. Storing it in BigQuery will not be cost-efficient for later analysis. Cloud Storage Archive is the best deal here.
upvoted 1 times
Nachtwaker
7 months, 1 week ago
"For later analysis" means not now, so BigQuery is not required at this moment. Cloud Storage content can be ingested into BigQuery 'later'. So it should be B instead of A.
upvoted 1 times
...
...
W00kie
10 months ago
Selected Answer: A
Imho A: "The first step the organization wants to take is to migrate its current data backup and disaster recovery solutions to GCP for later analysis". Both solutions are scalable and cost-efficient, but Cloud Storage is not designed for querying, therefore data analysis would be easier in BigQuery.
upvoted 1 times
...
[Removed]
1 year, 2 months ago
Selected Answer: B
The keyword in the question here is "cost-effective". Out of the 3 Disaster Recovery patterns (Cold, Warm, Hot HA), Cold is the most cost-effective which utilizes cloud storage. References: https://cloud.google.com/architecture/dr-scenarios-for-applications#cold-pattern-recovery-to-gcp https://cloud.google.com/architecture/dr-scenarios-planning-guide#use-cloud-storage-as-part-of-your-daily-backup-routine
upvoted 2 times
...
raj117
1 year, 2 months ago
Right Answer is B
upvoted 2 times
...
SMB2022
1 year, 2 months ago
Correct Answer: B
upvoted 2 times
...
AwesomeGCP
2 years ago
Selected Answer: B
B confirmed :-) https://cloud.google.com/solutions/dr-scenarios-planning-guide#use-cloud-storage-as-part-of-your-daily-backup-routine
upvoted 3 times
AzureDP900
1 year, 11 months ago
It is B
upvoted 2 times
...
...
giovy_82
2 years, 1 month ago
I would go for B, but a doubt remains: it is talking about a Disaster Recovery solution, which could relate not only to data but also to VMs and the applications running inside them. Anyway, B is more cost-efficient than A, considering also that the data backups need to be moved to GCP.
upvoted 1 times
...
absipat
2 years, 4 months ago
B of course
upvoted 2 times
...
DebasishLowes
3 years, 6 months ago
Ans : B. Cloud storage is cost efficient one.
upvoted 4 times
...
[Removed]
3 years, 11 months ago
Ans - B
upvoted 2 times
...
CHECK666
4 years ago
B is the answer.
upvoted 2 times
...
paxjoshi
4 years, 1 month ago
B is the correct answer. They need the data for later analysis and they are looking for cost-effective service.
upvoted 2 times
...
aiwaai
4 years, 1 month ago
Correct Answer: A
upvoted 1 times
aiwaai
4 years, 1 month ago
I make corrections, B is Correct Answer.
upvoted 1 times
...
...
ArizonaClassics
4 years, 2 months ago
Answer B works for me as the type of workload to be stored is not stated or defined
upvoted 1 times
...
SilentSec
4 years, 2 months ago
B confirmed: https://cloud.google.com/solutions/dr-scenarios-planning-guide#use-cloud-storage-as-part-of-your-daily-backup-routine
upvoted 3 times
...

Question 32

Exam Professional Cloud Security Engineer topic 1 question 32 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 32
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are creating an internal App Engine application that needs to access a user's Google Drive on the user's behalf. Your company does not want to rely on the current user's credentials. It also wants to follow Google-recommended practices.
What should you do?

  • A. Create a new Service account, and give all application users the role of Service Account User.
  • B. Create a new Service account, and add all application users to a Google Group. Give this group the role of Service Account User.
  • C. Use a dedicated G Suite Admin account, and authenticate the application's operations with these G Suite credentials.
  • D. Create a new service account, and grant it G Suite domain-wide delegation. Have the application use it to impersonate the user.
Suggested Answer: D 🗳️

Comments

mozammil89
Highly Voted 4 years, 6 months ago
I think the correct answer is D https://developers.google.com/admin-sdk/directory/v1/guides/delegation
upvoted 16 times
...
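A sketch of the setup behind answer D. Only the service account creation is a gcloud operation; the delegation itself is authorized by a Workspace admin in the Admin console. Account and project names are illustrative:

```shell
# Service account the App Engine app runs as
gcloud iam service-accounts create drive-access-sa \
    --display-name="Drive access for internal app"

# Its OAuth2 client ID is what a Workspace super admin authorizes for the
# Drive scope under Admin console > Security > API controls >
# Domain-wide delegation
gcloud iam service-accounts describe \
    drive-access-sa@my-project.iam.gserviceaccount.com \
    --format="value(oauth2ClientId)"
```

Once delegated, the application uses this service account's credentials to mint tokens that impersonate each user, so no user credentials are stored.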
eeghai7thioyaiR4
Most Recent 5 months, 2 weeks ago
A and B are wrong. Service Account User is used to grant someone the ability to impersonate a service account (ref: https://cloud.google.com/iam/docs/understanding-roles). So with those solutions, the user could perform actions as the newly created service account. We want the opposite: the service account needs to perform actions as a user. => D is the only working solution
upvoted 1 times
...
chagchoug
8 months ago
Selected Answer: D
Option A is false because it does not address the requirement of accessing a user's Google Drive on their behalf without relying on the user's credentials. Instead, option D, which involves granting domain-wide delegation to a service account for impersonation, is the recommended approach for this scenario.
upvoted 1 times
...
Olen93
1 year, 7 months ago
I'm not sure if D is the correct answer. The question specifically states that they want to follow Google-recommended practices and https://cloud.google.com/iam/docs/best-practices-service-accounts#domain-wide-delegation states to avoid domain-wide delegation. I do agree that D is the only way a service account can impersonate the user though
upvoted 1 times
...
Meyucho
1 year, 10 months ago
Selected Answer: D
A (Wrong): the access will be with the SA, not the user's account.
B (Wrong): same as A.
C (Wrong): in this case the access is with the admin's account, not the user's.
D (Correct!): it's the only answer that really impersonates the user.
upvoted 3 times
...
AzureDP900
1 year, 11 months ago
D. Create a new service account, and grant it G Suite domain-wide delegation. Have the application use it to impersonate the user.
upvoted 1 times
...
AwesomeGCP
2 years ago
Selected Answer: D
correct answer is D https://developers.google.com/admin-sdk/directory/v1/guides/delegation
upvoted 2 times
...
Medofree
2 years, 6 months ago
Selected Answer: D
Clearly D is the right answer
upvoted 2 times
...
Rhehehe
2 years, 9 months ago
They are asking for the Google-recommended practice. Does D say that?
upvoted 1 times
...
[Removed]
3 years, 11 months ago
Ans - D
upvoted 2 times
...
CHECK666
4 years ago
D is the answer.
upvoted 1 times
...
ArizonaClassics
4 years, 2 months ago
D is the best choice
upvoted 1 times
...
MarkDillon1075
4 years, 3 months ago
I agree D
upvoted 1 times
...

Question 33

Exam Professional Cloud Security Engineer topic 1 question 33 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 33
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A customer wants to move their sensitive workloads to a Compute Engine-based cluster using Managed Instance Groups (MIGs). The jobs are bursty and must be completed quickly. They have a requirement to be able to control the key lifecycle.
Which boot disk encryption solution should you use on the cluster to meet this customer's requirements?

  • A. Customer-supplied encryption keys (CSEK)
  • B. Customer-managed encryption keys (CMEK) using Cloud Key Management Service (KMS)
  • C. Encryption by default
  • D. Pre-encrypting files before transferring to Google Cloud Platform (GCP) for analysis
Suggested Answer: B 🗳️

Comments

animesh54
Highly Voted 2 years, 5 months ago
Selected Answer: B
Customer Managed Encryption keys using KMS lets users control the key management and rotation policies and Compute Engine Disks support CMEKs
upvoted 6 times
...
AwesomeGCP
Highly Voted 2 years ago
Selected Answer: B
Correct Answer: B Explanation/Reference: Reference https://cloud.google.com/kubernetes-engine/docs/how-to/dynamic-provisioning-cmek
upvoted 5 times
...
trashbox
Most Recent 5 months, 1 week ago
Selected Answer: B
"Control over the key lifecycle" is the key. The KMS is the most appropriate solution.
upvoted 1 times
...
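Answer B in practice: the customer creates the key in Cloud KMS and references it from the instance template the MIG uses, so every boot disk is CMEK-encrypted. Project, resource names, and location below are placeholders:

```shell
# Customer-managed key ring and key
gcloud kms keyrings create mig-keyring --location=us-central1
gcloud kms keys create mig-boot-key --keyring=mig-keyring \
    --location=us-central1 --purpose=encryption

# Instance template whose boot disks use the CMEK
gcloud compute instance-templates create sensitive-tmpl \
    --machine-type=n2-standard-4 \
    --boot-disk-kms-key=projects/my-project/locations/us-central1/keyRings/mig-keyring/cryptoKeys/mig-boot-key

# MIG stamped out from the template
gcloud compute instance-groups managed create sensitive-mig \
    --template=sensitive-tmpl --size=3 --zone=us-central1-a
```

CMEK gives the customer the key lifecycle (rotate, disable, destroy) without the operational burden of CSEK, where Google never stores the key at all.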

Question 34

Exam Professional Cloud Security Engineer topic 1 question 34 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 34
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your company is using Cloud Dataproc for its Spark and Hadoop jobs. You want to be able to create, rotate, and destroy symmetric encryption keys used for the persistent disks used by Cloud Dataproc. Keys can be stored in the cloud.
What should you do?

  • A. Use the Cloud Key Management Service to manage the data encryption key (DEK).
  • B. Use the Cloud Key Management Service to manage the key encryption key (KEK).
  • C. Use customer-supplied encryption keys to manage the data encryption key (DEK).
  • D. Use customer-supplied encryption keys to manage the key encryption key (KEK).
Suggested Answer: B 🗳️

Comments

mte_tech34
Highly Voted 4 years, 6 months ago
Answer is B. https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/customer-managed-encryption "The CMEK feature allows you to create, use, and revoke the key encryption key (KEK). Google still controls the data encryption key (DEK)."
upvoted 25 times
passtest100
4 years, 6 months ago
SHOULD BE A. No envelope encryption is mentioned in the question.
upvoted 5 times
Arad
3 years, 4 months ago
Correct answer is B, and A is wrong! Envelope encryption is the default mechanism in CMEK when used for Dataproc; please check this link: "This PD and bucket data is encrypted using a Google-generated data encryption key (DEK) and key encryption key (KEK). The CMEK feature allows you to create, use, and revoke the key encryption key (KEK). Google still controls the data encryption key (DEK)." For more information on Google data encryption keys, see Encryption at Rest. https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/customer-managed-encryption
upvoted 2 times
...
...
mynk29
3 years, 1 month ago
I agree but then should answer not be be C- customer supplied key?
upvoted 1 times
mynk29
3 years, 1 month ago
My bad I read it as Customer managed.. even though i now realised i wrote customer supplied. :D
upvoted 1 times
...
...
...
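The create/rotate/destroy lifecycle on the KEK, plus attaching it to Dataproc persistent disks, can be sketched with gcloud (key ring, key, project, and cluster names are illustrative):

```shell
# KEK with automatic rotation every 90 days
gcloud kms keys create dataproc-kek --keyring=dp-keyring \
    --location=us-central1 --purpose=encryption \
    --rotation-period=90d --next-rotation-time=2025-06-01T00:00:00Z

# On-demand rotation: create a new version and make it primary
gcloud kms keys versions create --key=dataproc-kek \
    --keyring=dp-keyring --location=us-central1 --primary

# Destroy an old version once nothing depends on it
gcloud kms keys versions destroy 1 --key=dataproc-kek \
    --keyring=dp-keyring --location=us-central1

# Dataproc cluster whose persistent disks are encrypted under this KEK
gcloud dataproc clusters create analytics-cluster --region=us-central1 \
    --gce-pd-kms-key=projects/my-project/locations/us-central1/keyRings/dp-keyring/cryptoKeys/dataproc-kek
```

Note that only the KEK is customer-managed here; the DEK protecting the disk data remains Google-controlled, which is why B fits the question.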
lolanczos
Most Recent 1 month, 1 week ago
Selected Answer: B
B. The KEK is always managed by KMS, and KMS never manages the DEK (so A is wrong). Both C and D are bad options; the customer supplying the encryption key defeats the purpose of the scenario in the question.
upvoted 1 times
...
BPzen
4 months, 1 week ago
Selected Answer: B
To manage encryption for Cloud Dataproc persistent disks, Google Cloud supports Customer-Managed Encryption Keys (CMEK) using Cloud Key Management Service (KMS). In this setup: Data Encryption Key (DEK): Google Cloud automatically generates and manages the DEK for encrypting the persistent disk data. Key Encryption Key (KEK): The KEK, managed in Cloud KMS, encrypts the DEK. This ensures the customer has control over key management operations, such as key rotation and deletion.
upvoted 1 times
...
Sarmee305
10 months ago
Selected Answer: B
Answer is B Cloud KMS allows you to manage KEKs, which in turn are used to encrypt the DEKs. DEKs are then used to encrypt the data. This separation ensures that the more sensitive KEK remains securely managed within the Cloud KMS
upvoted 1 times
...
dija123
1 year ago
Selected Answer: B
Agree with B
upvoted 1 times
...
amanshin
1 year, 9 months ago
The correct answer is B. Use the Cloud Key Management Service to manage the key encryption key (KEK). Cloud Dataproc uses a two-level encryption model, where the data encryption key (DEK) is encrypted with a key encryption key (KEK). The KEK is stored in Cloud Key Management Service (KMS), which allows you to create, rotate, and destroy the KEK as needed. If you use customer-supplied encryption keys (CSEKs) to manage the DEK, you will be responsible for managing the CSEKs yourself. This can be a complex and time-consuming task, and it can also increase the risk of data loss if the CSEKs are compromised.
upvoted 1 times
...
aashissh
1 year, 12 months ago
Selected Answer: A
Option B, using Cloud KMS to manage the key encryption key (KEK), is not necessary as persistent disks in Cloud Dataproc are already encrypted at rest using AES-256 encryption with a unique DEK generated and managed by Google.
upvoted 1 times
...
mahi9
2 years, 1 month ago
Selected Answer: B
The CMEK feature allows you to create, use, and revoke the key encryption key (KEK). Google still controls the data encryption key (DEK)."
upvoted 1 times
...
sameer2803
2 years, 1 month ago
there is a diagram in the link. if you understand the diagram, you will get the answer. https://cloud.google.com/sql/docs/mysql/cmek#with-cmek
upvoted 1 times
...
sameer2803
2 years, 1 month ago
Answer is B. the documentation says that Google does the data encryption by default and then that encryption key is again encrypted by KEK. which in turn can be managed by Customer.
upvoted 1 times
...
DA95
2 years, 3 months ago
Selected Answer: A
Option B, using the Cloud KMS to manage the key encryption key (KEK), is incorrect. The KEK is used to encrypt the DEK, so the DEK is the key that is managed by the Cloud KMS.
upvoted 1 times
...
Meyucho
2 years, 4 months ago
Selected Answer: A
B can be right, but we were never asked about envelope encryption... so... the solution is to use a customer-managed Data Encryption Key
upvoted 1 times
...
AzureDP900
2 years, 5 months ago
B. Use the Cloud Key Management Service to manage the key encryption key (KEK).
upvoted 1 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: B
Answer is B, https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/customer-managed-encryption
upvoted 4 times
...
giovy_82
2 years, 7 months ago
Selected Answer: B
In my opinion it should be B. reference : https://cloud.google.com/kms/docs/envelope-encryption How to encrypt data using envelope encryption The process of encrypting data is to generate a DEK locally, encrypt data with the DEK, use a KEK to wrap the DEK, and then store the encrypted data and the wrapped DEK. The KEK never leaves Cloud KMS.
upvoted 2 times
...
piyush_1982
2 years, 8 months ago
Selected Answer: A
I think the answer is A. DEK (Data encryption Key ) is the key which is used to encrypt the data. It can be both customer-managed or customer supplied in terms of GCP> https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/customer-managed-encryption The link above states "This PD and bucket data is encrypted using a Google-generated data encryption key (DEK) and key encryption key (KEK). The CMEK feature allows you to create, use, and revoke the key encryption key (KEK). Google still controls the data encryption key (DEK)."
upvoted 1 times
...
absipat
2 years, 10 months ago
b of course
upvoted 1 times
...

Question 35

Exam Professional Cloud Security Engineer topic 1 question 35 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 35
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are a member of the security team at an organization. Your team has a single GCP project with credit card payment processing systems alongside web applications and data processing systems. You want to reduce the scope of systems subject to PCI audit standards.
What should you do?

  • A. Use multi-factor authentication for admin access to the web application.
  • B. Use only applications certified compliant with PA-DSS.
  • C. Move the cardholder data environment into a separate GCP project.
  • D. Use VPN for all connections between your office and cloud environments.
Suggested Answer: C 🗳️

Comments

jonclem
Highly Voted 3 years ago
I'd go for answer C myself. https://cloud.google.com/solutions/best-practices-vpc-design
upvoted 22 times
...
[Removed]
Highly Voted 2 years, 5 months ago
Ans - C https://cloud.google.com/solutions/pci-dss-compliance-in-gcp#setting_up_your_payment-processing_environment
upvoted 7 times
...
AzureDP900
Most Recent 5 months, 1 week ago
answer is C
upvoted 1 times
...
Medofree
1 year ago
Selected Answer: C
Projects are units of isolation; the answer is C.
upvoted 2 times
...
CHECK666
2 years, 6 months ago
C is the answer.
upvoted 1 times
...
smart123
2 years, 10 months ago
The Answer is C. Check "Setting up your payment-processing environment" section in https://cloud.google.com/solutions/pci-dss-compliance-in-gcp. In the question, it is mentioned that it is the same environment for card processing as the Web App and Data processing and that is not recommended.
upvoted 4 times
...
xhova
3 years ago
Definitely C
upvoted 1 times
...

Question 37

Exam Professional Cloud Security Engineer topic 1 question 37 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 37
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A company allows every employee to use Google Cloud Platform. Each department has a Google Group, with all department members as group members. If a department member creates a new project, all members of that department should automatically have read-only access to all new project resources. Members of any other department should not have access to the project. You need to configure this behavior.
What should you do to meet these requirements?

  • A. Create a Folder per department under the Organization. For each department's Folder, assign the Project Viewer role to the Google Group related to that department.
  • B. Create a Folder per department under the Organization. For each department's Folder, assign the Project Browser role to the Google Group related to that department.
  • C. Create a Project per department under the Organization. For each department's Project, assign the Project Viewer role to the Google Group related to that department.
  • D. Create a Project per department under the Organization. For each department's Project, assign the Project Browser role to the Google Group related to that department.
Suggested Answer: A 🗳️
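A minimal sketch of the folder-level grant in answer A, with a hypothetical folder ID and group name. In practice the change is made with `gcloud resource-manager folders add-iam-policy-binding`; the helper below models what that command does to the folder's IAM policy:

```python
# Sketch of adding a Google Group to a folder's IAM policy (names hypothetical).
# Equivalent gcloud command:
#   gcloud resource-manager folders add-iam-policy-binding 123456789012 \
#       --member="group:dept-a@example.com" --role="roles/viewer"

def add_iam_binding(policy, role, member):
    """Append member to the binding for role, creating the binding if needed."""
    for binding in policy["bindings"]:
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            return policy
    policy["bindings"].append({"role": role, "members": [member]})
    return policy

folder_policy = {"bindings": []}
add_iam_binding(folder_policy, "roles/viewer", "group:dept-a@example.com")
print(folder_policy["bindings"])
```

Because IAM bindings are inherited down the resource hierarchy, a Viewer grant on the department's folder automatically covers every project later created inside it, which is the behavior the question asks for.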

Comments

ownez
Highly Voted 3 years, 6 months ago
Shouldn't it be A? Project Browser has least permissions comparing to Project Viewer. The question is about have read-access to all new project resources. roles/browser - Read access to browse the hierarchy for a project, including the folder, organization, and IAM policy. This role doesn't include permission to view resources in the project. https://cloud.google.com/iam/docs/understanding-roles#project-roles
upvoted 21 times
singhjoga
3 years, 3 months ago
Correct, it is A. Project Browser does not have access to the resources inside the project, which is the requirement in the question.
upvoted 8 times
...
...
uiuiui
Most Recent 5 months ago
Selected Answer: A
A please
upvoted 1 times
...
IlDave
1 year, 1 month ago
Selected Answer: A
Create a Folder per department under the Organization. For each department's Folder, assign the Project Viewer role to the Google Group related to that department. Granting Viewer on the folder fits with automatically getting permissions on project creation.
upvoted 2 times
...
mahi9
1 year, 1 month ago
Selected Answer: A
Create a Folder per department under the Organization. For each department's Folder, assign the Project Viewer role to the Google Group related to that department.
upvoted 1 times
...
Meyucho
1 year, 4 months ago
Selected Answer: A
Who voted C!?!??!?! The answer is A!!!!
upvoted 1 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: A
Correct answer - A https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy
upvoted 1 times
...
piyush_1982
1 year, 8 months ago
Selected Answer: C
The correct answer is definitely C. Let's divide the question into 2 parts. 1st, the role: the key requirement is that all members of that department should automatically have read-only access to all new project resources. The Project Browser role only allows read access to browse the hierarchy for a project, including the folder, organization, and allow policy; this role doesn't include permission to view resources in the project. Hence options B and D are not relevant, as they both are browser roles which DO NOT provide access to project resources. 2nd: option A creates a Folder per department and C creates a Project per department. However, the Project Viewer role is only applied at the project level. Hence the correct answer is C, which creates projects per department under the organization.
upvoted 2 times
Meyucho
1 year, 4 months ago
But... if you don't have a folder per department, where will all new projects created by users go? You will have to manually edit permissions every time! Using folders you set the permissions once, and then the only task you should do is maintain the proper group assignment.
upvoted 2 times
...
...
alvjtc
1 year, 9 months ago
Selected Answer: A
It's A, Project Viewer. Project Browser doesn't allow users to see resources, only find the project in the hierarchy.
upvoted 1 times
...
syllox
2 years, 11 months ago
It's A , browser is : Read access to browse the hierarchy for a project, including the folder, organization, and IAM policy. This role doesn't include permission to view resources in the project. https://cloud.google.com/iam/docs/understanding-roles#project-roles
upvoted 3 times
...
[Removed]
2 years, 12 months ago
either A or C because must be project viewer ,browser is not enough.https://cloud.google.com/iam/docs/understanding-roles
upvoted 1 times
...
[Removed]
2 years, 12 months ago
Why not A?
upvoted 1 times
...
desertlotus1211
3 years ago
The answer is A: https://stackoverflow.com/questions/54778596/whats-the-difference-between-project-browser-role-and-project-viewer-role-in-go#:~:text=8-,What's%20the%20difference%20between%20Project%20Browser%20role%20and,role%20in%20Google%20Cloud%20Platform&text=According%20to%20the%20console%20popup,read%20access%20to%20those%20resources.
upvoted 2 times
...
CloudTrip
3 years, 1 month ago
I think it's B. As the question says, all members of that department should automatically have read-only access to all new project resources, but Browser will only provide the get/list permissions, not read-only permission, so Viewer seems to be more accurate here. roles/browser: read access to browse the hierarchy for a project, including the folder, organization, and IAM policy; this role doesn't include permission to view resources in the project (resourcemanager.folders.get, resourcemanager.folders.list, resourcemanager.organizations.get, resourcemanager.projects.get, resourcemanager.projects.getIamPolicy, resourcemanager.projects.list). roles/viewer (Viewer): permissions for read-only actions that do not affect state, such as viewing (but not modifying) existing resources or data.
upvoted 1 times
...
subhala
3 years, 4 months ago
Question says - If a department member creates a new project, all members of that department should automatically have read-only access to all new project resources. and @ownez provided documentation that says - browser role doesn't include perm to view resources in the project. Hence B is the right answer.
upvoted 1 times
...
Fellipo
3 years, 5 months ago
A it's OK
upvoted 2 times
...
[Removed]
3 years, 5 months ago
Ans - A
upvoted 2 times
...
cipher90
3 years, 6 months ago
Answer is B: "have read-only access to all new project resources." So it has to be in a folder to cascade the permissions to new projects created.
upvoted 1 times
Meyucho
1 year, 4 months ago
If you do that, the other members of the department can't access the resources, just list the project in the folder.
upvoted 1 times
...
...

Question 38

Question #: 38
Topic #: 1

A customer's internal security team must manage its own encryption keys for encrypting data on Cloud Storage and decides to use customer-supplied encryption keys (CSEK).
How should the team complete this task?

  • A. Upload the encryption key to a Cloud Storage bucket, and then upload the object to the same bucket.
  • B. Use the gsutil command line tool to upload the object to Cloud Storage, and specify the location of the encryption key.
  • C. Generate an encryption key in the Google Cloud Platform Console, and upload an object to Cloud Storage using the specified key.
  • D. Encrypt the object, then use the gsutil command line tool or the Google Cloud Platform Console to upload the object to Cloud Storage.
Suggested Answer: B 🗳️
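A short sketch of what answer B entails on the client side, assuming Python and placeholder names: the team generates its own 256-bit AES key, base64-encodes it, and points gsutil at it. Google stores only a hash of the key, never the key itself.

```python
import base64
import hashlib
import os

# Generate a customer-supplied encryption key (CSEK): 32 random bytes = AES-256.
raw_key = os.urandom(32)
encryption_key = base64.b64encode(raw_key).decode()
# Cloud Storage identifies a CSEK by the base64-encoded SHA-256 of the raw key.
key_sha256 = base64.b64encode(hashlib.sha256(raw_key).digest()).decode()

print(len(raw_key) * 8)  # 256

# The key is then supplied at upload time, e.g. via the boto config gsutil reads:
#   [GSUtil]
#   encryption_key = <encryption_key>
# followed by: gsutil cp sensitive-object gs://example-bucket/
```

As FatCharlie notes below, the boto file does need to be updated for this to work; the object itself is not pre-encrypted by the user, which is the distinction between options B and D.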

Comments

DebasishLowes
Highly Voted 4 years, 1 month ago
Ans : B. Because if you encrypt the object using CSEK, then you can't use google cloud console to upload the object.
upvoted 15 times
...
FatCharlie
Highly Voted 4 years, 4 months ago
The fact is, both B & D would work. I lean towards B because it allows you to manage the file using GCP tools later as long as you keep that key around. B is definitely incomplete though, as the boto file does need to be updated.
upvoted 7 times
gcpengineer
1 year, 10 months ago
it mentions you can't use the console for CSEK
upvoted 1 times
...
...
3d9563b
Most Recent 8 months, 2 weeks ago
Selected Answer: B
Using the gsutil command-line tool with the appropriate options to specify the CSEK during the upload process is the proper way to manage customer-supplied encryption keys for Cloud Storage. This ensures that the data is encrypted using the provided key without the key being stored on Google's servers
upvoted 1 times
...
3d9563b
8 months, 3 weeks ago
Selected Answer: D
With Customer-Supplied Encryption Keys (CSEK), you handle the encryption of the data yourself and then upload the encrypted data to Cloud Storage, ensuring you provide the necessary encryption key when required for access control. This method ensures that you maintain control over the encryption process and the security of your data.
upvoted 1 times
...
salamKvelas
10 months, 2 weeks ago
With `gcloud storage` you can point to a CSEK, but with `gsutil` you cannot.
upvoted 1 times
...
shanwford
1 year ago
Selected Answer: B
Should be (B) - but IMHO "gsutil" is a legacy tool; it works with "gcloud": gcloud storage cp SOURCE_DATA gs://BUCKET_NAME/OBJECT_NAME --encryption-key=YOUR_ENCRYPTION_KEY
upvoted 2 times
...
ppandher
1 year, 5 months ago
I encrypted the object using a 256-bit encryption method. When I created a bucket, it gave me the option of encryption as Google-managed keys or customer-managed keys, but no CSEK. I opted for Google-managed as I did not have a CMEK created, and created the bucket. I uploaded my encrypted file to that bucket using the Console; the content of that file now shows as Google-managed, not CSEK. To my understanding you need to generate the key, encrypt that object with it, and upload it that way for the object to show CSEK encryption. I opt for option B now.
upvoted 1 times
...
mildi
1 year, 9 months ago
Answer D with "or the Google Cloud Platform Console" removed: instead of "D. Encrypt the object, then use the gsutil command line tool or the Google Cloud Platform Console to upload the object to Cloud Storage", it should read "D. Encrypt the object, then use the gsutil command line tool".
upvoted 1 times
...
twpower
1 year, 10 months ago
Selected Answer: B
Ans is B
upvoted 1 times
...
gcpengineer
1 year, 10 months ago
Selected Answer: B
B is the ans . https://cloud.google.com/storage/docs/encryption/customer-supplied-keys
upvoted 2 times
...
TQM__9MD
1 year, 11 months ago
Selected Answer: D
Object encryption is required. B does not encrypt objects.
upvoted 2 times
...
aashissh
1 year, 12 months ago
Selected Answer: D
To use customer-supplied encryption keys (CSEK) for encrypting data on Cloud Storage, the security team must encrypt the object first using the encryption key and then use the gsutil command line tool or the Google Cloud Platform Console to upload the object to Cloud Storage. Therefore, the correct answer is: D. Encrypt the object, then use the gsutil command line tool or the Google Cloud Platform Console to upload the object to Cloud Storage.
upvoted 2 times
gcpengineer
1 year, 10 months ago
it mentions you can't use the console for CSEK
upvoted 1 times
...
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: B
https://cloud.google.com/storage/docs/encryption/customer-supplied-keys Answer B
upvoted 2 times
...
GHOST1985
2 years, 6 months ago
Selected Answer: B
you can't use google cloud console to upload the object. https://cloud.google.com/storage/docs/encryption/using-customer-supplied-keys#upload_with_your_encryption_key
upvoted 1 times
...
absipat
2 years, 10 months ago
D of course
upvoted 1 times
...
Aiffone
2 years, 10 months ago
I will go with D, because encrypting the object before uploading means the customer manages their own key. A is not correct because it's not good practice to upload the encryption key to a storage bucket along with the encrypted object. B is not correct because specifying the location of the encryption key does not change anything. C means Google manages the key.
upvoted 1 times
...
[Removed]
3 years, 12 months ago
C and D are not right because the Google Cloud Console does not support CSEK. Must choose between A and B.
upvoted 1 times
...

Question 39

Question #: 39
Topic #: 1

A customer has 300 engineers. The company wants to grant different levels of access and efficiently manage IAM permissions between users in the development and production environment projects.
Which two steps should the company take to meet these requirements? (Choose two.)

  • A. Create a project with multiple VPC networks for each environment.
  • B. Create a folder for each development and production environment.
  • C. Create a Google Group for the Engineering team, and assign permissions at the folder level.
  • D. Create an Organizational Policy constraint for each folder environment.
  • E. Create projects for each environment, and grant IAM rights to each engineering user.
Suggested Answer: BC 🗳️
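The combination of B and C can be sketched as follows (folder names, group, and roles are placeholders, not taken from the question): one folder per environment, with the engineering Google Group bound at each folder so that every project created underneath inherits the intended access level.

```python
# Hypothetical folder-per-environment layout (option B) with group-level
# grants at the folder (option C); the exact roles would depend on what
# "different levels of access" means for each environment.
folder_grants = {
    "folders/development": {"group:engineering@example.com": "roles/editor"},
    "folders/production":  {"group:engineering@example.com": "roles/viewer"},
}

def inherited_role(folder, member):
    """Role a member inherits on any project created under the folder."""
    return folder_grants.get(folder, {}).get(member)

print(inherited_role("folders/production", "group:engineering@example.com"))  # roles/viewer
```

Binding the group once per folder is what makes option E (per-user, per-project grants for 300 engineers) unnecessary.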

Comments

mozammil89
Highly Voted 3 years, 6 months ago
B and C should be correct...
upvoted 23 times
...
mahi9
Most Recent 7 months, 2 weeks ago
Selected Answer: BC
B and C are viable
upvoted 2 times
...
Meyucho
10 months, 4 weeks ago
Selected Answer: BC
Which Policy Constraint allows managing permissions?! D is not an option. The answer is B and C.
upvoted 2 times
...
AwesomeGCP
1 year ago
Selected Answer: BC
B and C are the correct answers!!
upvoted 2 times
...
danielklein09
1 year, 7 months ago
B is correct. But if you make one group (by choosing option C), how do you manage the permissions for the dev environment? Since you have only one group, you will offer the same access for all 300 engineers (that are in that group) to the dev and prod environments, so this will not answer the question: efficiently manage IAM permissions between users in the development and production environment projects.
upvoted 4 times
...
Ksrp
1 year, 7 months ago
CE - A general recommendation is to have one project per application per environment. For example, if you have two applications, "app1" and "app2", each with a development and production environment, you would have four projects: app1-dev, app1-prod, app2-dev, app2-prod. This isolates the environments from each other, so changes to the development project do not accidentally impact production, and gives you better access control, since you can (for example) grant all developers access to development projects but restrict production access to your CI/CD pipeline. https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations
upvoted 1 times
...
Jane111
2 years, 5 months ago
A - no VPC required. B - yes, a prerequisite. C - yes. D - likely, but C comes first. E - not scalable/feasible/advisable.
upvoted 2 times
...
DebasishLowes
2 years, 6 months ago
Ans : BC
upvoted 1 times
...
[Removed]
2 years, 11 months ago
Ans - BC
upvoted 1 times
...
CHECK666
3 years ago
B,C is the answer. Create a folder for each env and assign IAM policies to the group.
upvoted 2 times
...
MohitA
3 years, 1 month ago
BC is the right answer, create folder for each env and assign IAM policies to group
upvoted 1 times
...
aiwaai
3 years, 1 month ago
Correct Answer: CE
upvoted 1 times
aiwaai
3 years, 1 month ago
made correction CE -> BC
upvoted 2 times
...
...
xhova
3 years, 6 months ago
B & C. D does not help efficiently manage IAM; effective IAM implies using groups.
upvoted 2 times
smart123
3 years, 3 months ago
Organization policy is used on resources and not the users. Hence option 'D' cannot be right.
upvoted 2 times
...
...
jonclem
3 years, 6 months ago
I'd say B and D are correct
upvoted 1 times
...

Question 40

Question #: 40
Topic #: 1

You want to evaluate your organization's Google Cloud instance for PCI compliance. You need to identify Google's inherent controls.
Which document should you review to find the information?

  • A. Google Cloud Platform: Customer Responsibility Matrix
  • B. PCI DSS Requirements and Security Assessment Procedures
  • C. PCI SSC Cloud Computing Guidelines
  • D. Product documentation for Compute Engine
Suggested Answer: A 🗳️

Comments

3d9563b
8 months, 3 weeks ago
Selected Answer: A
The Customer Responsibility Matrix is the most relevant document for identifying Google's inherent controls related to PCI compliance, as it explicitly details the security controls managed by Google versus those managed by the customer.
upvoted 1 times
...
okhascorpio
1 year, 1 month ago
Selected Answer: A
Probably an outdated question, because there is a specific PCI DSS responsibility matrix available (source: https://cloud.google.com/security/compliance/pci-dss), but the closest answer is A because it directly addresses Google's inherent controls while the others don't.
upvoted 1 times
...
techdsmart
1 year, 1 month ago
But here, isn't "controls" different from "responsibility"? I don't understand how A is the answer, since by controls we are referring to security controls, from a security and compliance perspective. C is still the correct answer.
upvoted 1 times
...
rottzy
1 year, 6 months ago
answer is A, https://cloud.google.com/files/GCP_Client_Facing_Responsibility_Matrix_PCI_2018.pdf
upvoted 1 times
...
Xoxoo
1 year, 6 months ago
Selected Answer: A
To identify Google's inherent controls for PCI compliance, you should review: A. Google Cloud Platform: Customer Responsibility Matrix The Google Cloud Platform: Customer Responsibility Matrix provides information about the shared responsibility model between Google Cloud and the customer. It outlines which security controls are managed by Google and which are the customer's responsibility. This document will help you understand Google's inherent controls as they relate to PCI compliance.
upvoted 2 times
...
amanshin
1 year, 9 months ago
The correct answer is A. Google Cloud Platform: Customer Responsibility Matrix. The Google Cloud Platform: Customer Responsibility Matrix (CRM) is a document that outlines the responsibilities of Google and its customers for PCI compliance. The CRM identifies the inherent controls that Google provides, which are the security controls that are built into Google Cloud Platform. The PCI DSS Requirements and Security Assessment Procedures (SAQs) are a set of requirements that organizations must meet to be PCI compliant. The SAQs do not identify Google's inherent controls. The PCI SSC Cloud Computing Guidelines are a set of guidelines that organizations can use to help them achieve PCI compliance when using cloud computing services. The guidelines do not identify Google's inherent controls. The product documentation for Compute Engine is a document that provides information about the features and capabilities of Compute Engine. The documentation does not identify Google's inherent controls.
upvoted 1 times
...
gcpengineer
1 year, 10 months ago
Selected Answer: C
C is the ans
upvoted 2 times
...
gcpengineer
1 year, 11 months ago
Selected Answer: B
B is the ans. as the pci-dss req in gcp
upvoted 1 times
gcpengineer
1 year, 10 months ago
C is the ans
upvoted 1 times
...
...
aashissh
1 year, 12 months ago
Selected Answer: A
The answer is A. Google Cloud Platform: Customer Responsibility Matrix. This document outlines the responsibilities of both the customer and Google for securing the cloud environment and is an important resource for understanding Google's inherent controls for PCI compliance. The PCI DSS Requirements and Security Assessment Procedures and the PCI SSC Cloud Computing Guidelines are both helpful resources for understanding the PCI compliance requirements, but they do not provide information on Google's specific inherent controls. The product documentation for Compute Engine is focused on the technical aspects of using that service and is unlikely to provide a comprehensive overview of Google's inherent controls.
upvoted 3 times
...
1explorer
2 years ago
https://cloud.google.com/architecture/pci-dss-compliance-in-gcp B is correct answer
upvoted 3 times
...
tailesley
2 years, 1 month ago
It is B: The PCI DSS Requirements and Security Assessment Procedures is the document that outlines the specific requirements for PCI compliance. It is created and maintained by the Payment Card Industry Security Standards Council (PCI SSC), which is the organization responsible for establishing and enforcing security standards for the payment card industry. This document is used by auditors to evaluate the security of an organization's payment card systems and processes. While the other options may provide information about Google's security controls and the customer's responsibilities for security, they do not provide the specific requirements for PCI compliance that the PCI DSS document does.
upvoted 3 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: A
A. Google Cloud Platform: Customer Responsibility Matrix
upvoted 1 times
...
tangac
2 years, 7 months ago
Selected Answer: A
https://services.google.com/fh/files/misc/gcp_pci_shared_responsibility_matrix_aug_2021.pdf
upvoted 2 times
...

Question 41

Question #: 41
Topic #: 1

Your company runs a website that will store PII on Google Cloud Platform. To comply with data privacy regulations, this data can only be stored for a specific amount of time and must be fully deleted after this specific period. Data that has not yet reached the time period should not be deleted. You want to automate the process of complying with this regulation.
What should you do?

  • A. Store the data in a single Persistent Disk, and delete the disk at expiration time.
  • B. Store the data in a single BigQuery table and set the appropriate table expiration time.
  • C. Store the data in a single Cloud Storage bucket and configure the bucket's Time to Live.
  • D. Store the data in a single BigTable table and set an expiration time on the column families.
Suggested Answer: C 🗳️
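The bucket "Time to Live" in answer C is implemented with Object Lifecycle Management. A minimal sketch (the retention period and bucket name are placeholders) of building the rule file and how it would be applied:

```python
import json

# Lifecycle rule that deletes each object once it reaches the retention limit.
# The age condition is evaluated per object from its own creation time, so
# objects that have not yet reached the period are left alone.
RETENTION_DAYS = 30  # placeholder for the regulated retention period

lifecycle = {"rule": [{"action": {"type": "Delete"},
                       "condition": {"age": RETENTION_DAYS}}]}

config_json = json.dumps(lifecycle, indent=2)
print(config_json)
# Saved to lifecycle.json and applied with:
#   gsutil lifecycle set lifecycle.json gs://example-bucket
```

This per-object behavior is the point xhova makes below: the bucket itself is never deleted, only objects that have aged past the configured period.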

Comments

KILLMAD
Highly Voted 4 years, 7 months ago
I believe the Answer is C not B. This isn't data which needs to be analyzed, so I don't understand why would it be stored in BQ when having data stored in GCS seems much more reasonable. I think the only thing about answer C which throws me off is the fact that they don't mention object life cycle management
upvoted 14 times
mozammil89
4 years, 6 months ago
Answer C is correct. The TTL is common use case of Cloud Storage life cycle management. Here is what GCP says: "To support common use cases like setting a Time to Live (TTL) for objects, retaining noncurrent versions of objects, or "downgrading" storage classes of objects to help manage costs, Cloud Storage offers the Object Lifecycle Management feature. This page describes the feature as well as the options available when using it. To learn how to enable Object Lifecycle Management, and for examples of lifecycle policies, see Managing Lifecycles." https://cloud.google.com/storage/docs/lifecycle
upvoted 7 times
PleeO
4 months, 2 weeks ago
This answer is still valid till 2024
upvoted 1 times
...
...
...
trashbox
Most Recent 5 months, 1 week ago
Selected Answer: C
Bucket lock and TTL are the key features of Cloud Storage.
upvoted 1 times
...
Bypoo
7 months, 3 weeks ago
Selected Answer: C
Cloud Storage life cycle management
upvoted 1 times
...
Echizen06
1 year, 1 month ago
Selected Answer: C
Answer is C
upvoted 2 times
...
cyberpunk21
1 year, 1 month ago
B is correct. Everyone forgot this from the question: "Data that has not yet reached the time period should not be deleted." This means data keeps being updated; if we enforce a TTL for a bucket, the whole bucket will be deleted, including updated data. With BigQuery we do updating using pipeline jobs and delete data using the expiration time.
upvoted 1 times
...
mahi9
1 year, 7 months ago
Selected Answer: C
store it in a bucket for TTL
upvoted 2 times
...
PST21
1 year, 9 months ago
Cloud Storage does not delete promptly, hence BigQuery, as it is sensitive data.
upvoted 1 times
...
csrazdan
1 year, 10 months ago
Selected Answer: B
Lifecycle Management for Cloud Storage is used to manage the storage class to save cost. For data management, you set a retention time on the bucket. I will opt for B as the correct answer.
upvoted 1 times
...
AwesomeGCP
2 years ago
Selected Answer: C
Correct Answer: C
upvoted 2 times
...
giovy_82
2 years, 1 month ago
I would go for C, but all 4 answers are in my opinion incomplete. All of them say "single" bucket or table, which means that if differently dated rows/elements are stored in the same bucket or table, they will expire together and probably be deleted before their real expiration time. So I expected to see partitioning or multiple buckets.
upvoted 2 times
...
mynk29
2 years, 7 months ago
Outdated question again- should be bucket locks now.
upvoted 1 times
...
DebasishLowes
3 years, 6 months ago
Ans : C
upvoted 2 times
...
[Removed]
3 years, 11 months ago
Ans - C
upvoted 4 times
...
aiwaai
4 years, 1 month ago
Correct Answer: C
upvoted 3 times
...
Ganshank
4 years, 4 months ago
The answers need to be worded better. If we're taking the terms literally as specified in the options, then C cannot be the correct answer, since there's no Time to Live configuration for a GCS bucket, only a Lifecycle Policy. With BigQuery, there is no row-level expiration, although we could create this behavior using Partitioned Tables. So this could be a potential answer. D - it is possible to simulate cell-level TTL (https://cloud.google.com/bigtable/docs/gc-cell-level), so this too could be a potential answer, especially when different cells need different TTLs. Between B & D, BigQuery follows a pay-as-you-go model and its storage costs are comparable to GCS storage costs. So this would be the more appropriate solution.
upvoted 3 times
smart123
4 years, 3 months ago
The Buckets do have "Time to Live" feature. https://cloud.google.com/storage/docs/lifecycle Hence 'C' is the answer
upvoted 4 times
...
...
jonclem
4 years, 6 months ago
I believe B is correct. Setting a TTL of 14 days on the bucket via LifeCycle will not cause the bucket itself to be deleted after 14 days, instead it will cause each object uploaded to that bucket to be deleted 14 days after it was created
upvoted 3 times
xhova
4 years, 6 months ago
Answer is C. You don't need the bucket to be deleted, you need the stored PII data to be deleted.
upvoted 6 times
...
...

Question 42

Question #: 42
Topic #: 1

A DevOps team will create a new container to run on Google Kubernetes Engine. As the application will be internet-facing, they want to minimize the attack surface of the container.
What should they do?

  • A. Use Cloud Build to build the container images.
  • B. Build small containers using small base images.
  • C. Delete non-used versions from Container Registry.
  • D. Use a Continuous Delivery tool to deploy the application.
Suggested Answer: B 🗳️
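Answer B is usually realized with a multi-stage build on a minimal base image. A hedged sketch (image names, language, and paths are illustrative, not taken from the question):

```dockerfile
# Build stage: full toolchain, discarded from the final image.
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./...

# Final stage: a distroless base with no shell or package manager,
# so the running container exposes far less attack surface.
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image carries only the compiled binary, which is exactly the "small containers using small base images" the answer describes.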

Comments

xhova
Highly Voted 5 years ago
Ans is B. Small containers usually have a smaller attack surface as compared to containers that use large base images. https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-how-and-why-to-build-small-container-images
upvoted 31 times
smart123
4 years, 9 months ago
I agree
upvoted 2 times
...
...
3d9563b
Most Recent 8 months, 3 weeks ago
Selected Answer: B
Building small containers using minimal and well-maintained base images directly reduces the attack surface and improves the security posture of your containers when they are deployed on GKE.
upvoted 1 times
...
okhascorpio
1 year, 1 month ago
Selected Answer: B
the correct answer is having as few tools in your image as possible, Source: Remove unnecessary tools https://cloud.google.com/architecture/best-practices-for-building-containers?hl=en I guess it can be achieved by option "B" building a small container from a small source image.
upvoted 1 times
...
Afe3saa7
1 year, 2 months ago
Selected Answer: B
A. Use Cloud Build to build the container images. Will give you the tools to build an image but not ensure any risk reduction B. Build small containers using small base images. Images with a smaller footprint, stripped of all binaries/libraries/functions that are not used will make it harder for an attacker to find leverage to move laterally or vertically, hence >>reducing the attack/risk surface<< for the image. C. Delete non-used versions from Container Registry. Non-used images are not running live and hence are not exploitable. Removing non-used images from the registry will not reduce the attack surface of the running application. D. Use a Continuous Delivery tool to deploy the application. Same as A.
upvoted 1 times
...
Xoxoo
1 year, 6 months ago
Selected Answer: B
To minimize the attack surface of a container that will run on Google Kubernetes Engine and be internet-facing, the DevOps team should: B. Build small containers using small base images. Building small containers using minimal base images reduces the attack surface by eliminating unnecessary software and dependencies, which can potentially contain vulnerabilities. This approach enhances security and reduces the risk of potential attacks. Using small base images, such as Alpine Linux or distroless images, is a best practice for container security.
upvoted 3 times
...
civilizador
1 year, 8 months ago
Answer is B. Because this is a GCP exam, the GCP docs are always the source of truth; even if you occasionally disagree with them, you need to choose the answer proposed in the GCP docs as the best practice. Here is the link to Google's official best practices for building containers, and here is the snippet regarding this particular question: https://cloud.google.com/architecture/best-practices-for-building-containers#build-the-smallest-image-possible
"Build the smallest image possible. Building a smaller image offers advantages such as faster upload and download times, which is especially important for the cold start time of a pod in Kubernetes: the smaller the image, the faster the node can download it. However, building a small image can be difficult because you might inadvertently include build dependencies or unoptimized layers in your final image."
upvoted 2 times
...
[Removed]
1 year, 8 months ago
Selected Answer: B
"B". For a smaller attack surface, use smaller images by removing any unnecessary tools/software from the image. https://cloud.google.com/solutions/best-practices-for-building-containers
upvoted 2 times
...
alleinallein
2 years ago
Selected Answer: C
Importance: MEDIUM. "To protect your apps from attackers, try to reduce the attack surface of your app by removing any unnecessary tools." https://cloud.google.com/architecture/best-practices-for-building-containers
upvoted 2 times
adb4007
1 year, 4 months ago
So building a small image is the answer, no?
upvoted 1 times
...
...
mahi9
2 years, 1 month ago
Selected Answer: C
it is viable
upvoted 1 times
...
rotorclear
2 years, 5 months ago
Selected Answer: B
B definitely
upvoted 1 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: B
B is the correct answer.
upvoted 1 times
...
zellck
2 years, 6 months ago
Selected Answer: B
B is the answer.
upvoted 1 times
...
jitu028
2 years, 6 months ago
Ans is B: https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-how-and-why-to-build-small-container-images
"Security and vulnerabilities: Aside from performance, there are significant security benefits from using smaller containers. Small containers usually have a smaller attack surface as compared to containers that use large base images."
upvoted 3 times
...
giovy_82
2 years, 7 months ago
Selected Answer: B
The only answer that will really reduce the attack surface while exposing apps to the internet is B, small containers (e.g. a single web page?).
upvoted 3 times
...
Medofree
3 years ago
B. Because you will have fewer programs in the image, and thus fewer vulnerabilities.
upvoted 1 times
...
lxs
3 years, 4 months ago
Selected Answer: C
A. Use Cloud Build to build the container images: whether you build a container using Cloud Build or not, the surface is the same.
B. Build small containers using small base images: it is indeed a best practice, but I doubt small base images reduce the surface. It is still the same app version with the same vulnerabilities, etc.
C. Delete non-used versions from Container Registry: unused, historical versions are additional attack surface. An attacker can exploit an old, unpatched image, which is indeed a surface extension.
D. Use a Continuous Delivery tool to deploy the application: this is just a method of image delivery. The app is the same.
upvoted 3 times
Afe3saa7
1 year, 2 months ago
Non-used images in Container Registry are, as they suggest, not running live and hence not exploitable. Deleting images in the registry will not change the attack surface of the mentioned image.
upvoted 1 times
...
...
DebasishLowes
4 years ago
Ans : B. The smaller the base image, the fewer the vulnerabilities and the smaller the chance of attack.
upvoted 2 times
...

Question 43

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 43 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 43
Topic #: 1
[All Professional Cloud Security Engineer Questions]

While migrating your organization's infrastructure to GCP, a large number of users will need to access GCP Console. The Identity Management team already has a well-established way to manage your users and want to keep using your existing Active Directory or LDAP server along with the existing SSO password.
What should you do?

  • A. Manually synchronize the data in Google domain with your existing Active Directory or LDAP server.
  • B. Use Google Cloud Directory Sync to synchronize the data in Google domain with your existing Active Directory or LDAP server.
  • C. Users sign in directly to the GCP Console using the credentials from your on-premises Kerberos compliant identity provider.
  • D. Users sign in using OpenID (OIDC) compatible IdP, receive an authentication token, then use that token to log in to the GCP Console.
Show Suggested Answer Hide Answer
Suggested Answer: B 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
sudarchary
Highly Voted 2 years, 8 months ago
Selected Answer: B
https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-configuring-single-sign-on
upvoted 7 times
...
DebasishLowes
Highly Voted 3 years, 7 months ago
Ans : B
upvoted 5 times
...
dbf0a72
Most Recent 9 months, 1 week ago
Selected Answer: B
https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-configuring-single-sign-on
upvoted 1 times
...
AwesomeGCP
2 years ago
Selected Answer: B
https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-configuring-single-sign-on
upvoted 2 times
...
absipat
2 years, 4 months ago
B of course
upvoted 2 times
...
ThisisJohn
2 years, 9 months ago
Selected Answer: D
My vote goes for D. From the blog post linked below: "users' passwords are not synchronized by default. Only the identities are synchronized, unless you make an explicit choice to synchronize passwords (which is not a best practice and should be avoided)".
Also, from GCP documentation, "Authenticating with OIDC and AD FS": https://cloud.google.com/anthos/clusters/docs/on-prem/1.6/how-to/oidc-adfs
Blog post quoted above: https://cloud.google.com/blog/products/identity-security/using-your-existing-identity-management-system-with-google-cloud-platform
upvoted 1 times
rr4444
2 years, 9 months ago
D sounds nice, but the user doesn't "use" the token.... that's used in the integration with Cloud Identity. So answer must be B, GCDS
upvoted 3 times
...
...
[Removed]
3 years, 11 months ago
Ans - B
upvoted 4 times
...
saurabh1805
3 years, 11 months ago
B is correct answer here.
upvoted 4 times
...

Question 44

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 44 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 44
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your company is using GSuite and has developed an application meant for internal usage on Google App Engine. You need to make sure that an external user cannot gain access to the application even when an employee's password has been compromised.
What should you do?

  • A. Enforce 2-factor authentication in GSuite for all users.
  • B. Configure Cloud Identity-Aware Proxy for the App Engine Application.
  • C. Provision user passwords using GSuite Password Sync.
  • D. Configure Cloud VPN between your private network and GCP.
Show Suggested Answer Hide Answer
Suggested Answer: A 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
rafaelc
Highly Voted 5 years ago
A. Enforce 2-factor authentication in GSuite for all users.
upvoted 22 times
...
lolanczos
Most Recent 1 month, 1 week ago
Selected Answer: B
B is correct. Cloud Identity-Aware Proxy (IAP) enforces identity-based access controls directly at the application layer, ensuring that only authenticated and authorized users can access the App Engine application. It adds an additional security layer independent of the user's credentials, thereby protecting the application even if an employee's password is compromised.
A is not sufficient because enforcing 2FA only protects the authentication process and does not provide the granular, context-aware access control that IAP offers.
upvoted 2 times
anciaosinclinado
1 month ago
But if the user's password is compromised and there is no 2FA configured for that account, an attacker would be able to authenticate even if the application uses IAP.
upvoted 1 times
...
...
Rakesh21
2 months, 1 week ago
Selected Answer: A
Default IAP Configuration: By default, IAP requires users to be authenticated with Google accounts, but this authentication might only involve a username and password unless 2FA is specifically enforced for those accounts by the organization's security policies in Google Workspace or Cloud Identity.
upvoted 1 times
...
coompiler
5 months, 2 weeks ago
Selected Answer: B
I go with B. IAP is zero trust and context aware
upvoted 1 times
...
coompiler
5 months, 2 weeks ago
I go with B. IAP is zero trust and context aware
upvoted 1 times
...
PankajKapse
6 months, 2 weeks ago
Selected Answer: B
I also feel it's B, as even if the password is compromised, we can block based on IP ranges, geolocation, etc.
upvoted 1 times
...
Oujay
9 months, 2 weeks ago
Selected Answer: B
A Cloud VPN creates a secure tunnel between your network and GCP, but it wouldn't restrict access based on individual user identities.
upvoted 2 times
...
Oujay
9 months, 2 weeks ago
2FA adds an extra layer of security, but if an external user has both the password and the second factor (e.g., a verification code), they might still gain access. So my answer is B: all external users will be blocked, with the right authentication or not.
upvoted 1 times
...
dbf0a72
1 year, 3 months ago
Selected Answer: A
A is the answer.
upvoted 1 times
...
raj117
1 year, 8 months ago
Right Answer is A
upvoted 2 times
...
SMB2022
1 year, 8 months ago
Correct Answer A
upvoted 2 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: A
A is the answer.
upvoted 3 times
...
sudarchary
3 years, 2 months ago
Selected Answer: A
https://support.google.com/a/answer/175197?hl=en
upvoted 2 times
...
Jane111
3 years, 11 months ago
Shouldn't it be B (Configure Cloud Identity-Aware Proxy for the App Engine application)? Identity-based app access.
upvoted 4 times
[Removed]
1 year, 8 months ago
I was thinking the same thing. Turns out IAP ensures security by enforcing 2FA. So at the end of the day, 2FA is the real solution. 2FA without IAP would still address the risk. IAP without 2FA might not. https://cloud.google.com/iap/docs/configuring-reauth#supported_reauthentication_methods
upvoted 2 times
...
...
desertlotus1211
4 years ago
The key is "external user". Best practice is to have internal users/datacenters connect via VPN for security purposes, correct? External users will try to connect via the Internet; they still cannot reach the App Engine app even if they have a user's password, because a VPN connection is needed to reach the resource. MFA will work IF the external user has VPN access... But I think D is what they're looking for based on the question...
upvoted 3 times
mynk29
3 years, 1 month ago
Agree, but there is no mention that the external user doesn't have internal network access too. A is the better option as it covers both scenarios.
upvoted 2 times
...
...
DebasishLowes
4 years ago
Ans : A. When a password is compromised, enforcing 2-factor authentication is the best way to keep unauthorized users out.
upvoted 2 times
...
soukumar369
4 years, 4 months ago
Enforcing 2-factor authentication can protect access even when an employee's password has been compromised.
upvoted 2 times
...
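A detail worth knowing for the IAP-vs-2FA debate above: once IAP authenticates a user, it forwards the verified identity to the backend as a signed JWT in the `x-goog-iap-jwt-assertion` header, and the app is expected to verify it. A stdlib-only sketch of decoding a JWT payload, with the sample token built in-line for illustration; a real integration must verify the signature against Google's public keys before trusting any claim:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT WITHOUT verifying the
    signature. For illustration only: production code must validate
    the signature, issuer, and audience before trusting any claim."""
    payload_b64 = token.split(".")[1]
    # JWT segments are base64url-encoded without padding; restore it.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def _b64(obj: dict) -> str:
    """Encode a dict as an unpadded base64url JWT segment."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

# A sample (unsigned) token standing in for the header IAP would send.
sample = ".".join([_b64({"alg": "ES256"}), _b64({"email": "user@example.com"}), ""])
```

The email claim recovered this way is what the App Engine app would use for per-user authorization behind IAP.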

Question 45

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 45 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 45
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A large financial institution is moving its Big Data analytics to Google Cloud Platform. They want to have maximum control over the encryption process of data stored at rest in BigQuery.
What technique should the institution use?

  • A. Use Cloud Storage as a federated Data Source.
  • B. Use a Cloud Hardware Security Module (Cloud HSM).
  • C. Customer-managed encryption keys (CMEK).
  • D. Customer-supplied encryption keys (CSEK).
Show Suggested Answer Hide Answer
Suggested Answer: C 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
Ganshank
Highly Voted 4 years, 10 months ago
CSEK is only supported in Google Cloud Storage and Compute Engine, therefore D cannot be the right answer. Ideally, it would be client-side encryption, with BigQuery providing another round of encryption of the encrypted data - https://cloud.google.com/bigquery/docs/encryption-at-rest#client_side_encryption, but since that is not one of the options, we can go with C as the next best option.
upvoted 19 times
smart123
4 years, 10 months ago
Option 'C' is correct. Option 'D' is not correct as CSEK a feature in Google Cloud Storage and Google Compute Engine only.
upvoted 5 times
...
...
Zek
Most Recent 4 months, 1 week ago
Selected Answer: C
BigQuery and BigLake tables don't support Customer-Supplied Encryption Keys (CSEK). https://cloud.google.com/bigquery/docs/customer-managed-encryption#before_you_begin
upvoted 3 times
...
SQLbox
7 months, 1 week ago
Correct answer is b
upvoted 1 times
...
crazycosmos
8 months, 2 weeks ago
Selected Answer: D
I prefer D for max control.
upvoted 1 times
...
SQLbox
8 months, 2 weeks ago
Correct answer is D.
D. Customer-supplied encryption keys (CSEK). Here's an explanation of why CSEK is the best choice and a brief review of the other options:
Customer-supplied encryption keys (CSEK): CSEK allows the institution to manage their own encryption keys and supply these keys to Google Cloud Platform when needed. This provides maximum control over the encryption process because the institution retains possession of the encryption keys and can rotate, revoke, or replace them as desired.
upvoted 1 times
...
Ishu_awsguy
1 year, 10 months ago
Why not Cloud HSM ? Maximum control over keys
upvoted 1 times
Ishu_awsguy
1 year, 10 months ago
Sorry, from HSM the keys would become customer-supplied encryption keys, which are not supported. Ans is customer-managed encryption keys.
upvoted 1 times
...
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: C
C. Customer-managed encryption keys (CMEK).
upvoted 3 times
...
DebasishLowes
4 years ago
Ans : C
upvoted 2 times
...
Aniyadu
4 years, 3 months ago
I feel C is the right answer. if customer wants to manage the keys from on-premises then D would be correct.
upvoted 3 times
...
[Removed]
4 years, 5 months ago
Ans - C
upvoted 3 times
...
saurabh1805
4 years, 5 months ago
C is correct answer as CSEK is not available for big query.
upvoted 3 times
...
MohitA
4 years, 7 months ago
C is the right answer, as CSEK is only available for Cloud Storage and Compute Engine.
upvoted 1 times
...
aiwaai
4 years, 7 months ago
Correct Answer: C
upvoted 2 times
...
ArizonaClassics
4 years, 8 months ago
C is the RIGHT ONE!!! If you want to manage the key encryption keys used for your data at rest, instead of having Google manage the keys, use Cloud Key Management Service to manage your keys. This scenario is known as customer-managed encryption keys (CMEK). https://cloud.google.com/bigquery/docs/encryption-at-rest
upvoted 2 times
ArizonaClassics
4 years, 7 months ago
ALSO READ : https://cloud.google.com/bigquery/docs/customer-managed-encryption
upvoted 2 times
...
...
ranjeetpatil
4 years, 10 months ago
Ans is C. BigQuery does not support CSEK. https://cloud.google.com/security/encryption-at-rest
upvoted 4 times
...
srinidutt
4 years, 10 months ago
I also feel D is right.
upvoted 1 times
...
xhova
5 years ago
Answer is D. For max control you don't want to store the Key with Google.
upvoted 3 times
...
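To apply CMEK as the voted answer suggests, BigQuery is pointed at a Cloud KMS key via the key's full resource name (for example, in a destination table's encryption configuration). A minimal stdlib sketch of composing and validating that name; the project, key ring, and key names below are made up for illustration:

```python
import re

def kms_key_name(project: str, location: str, keyring: str, key: str) -> str:
    """Build the Cloud KMS crypto-key resource name that CMEK settings
    reference. All four components here are illustrative placeholders."""
    return (f"projects/{project}/locations/{location}/"
            f"keyRings/{keyring}/cryptoKeys/{key}")

# Shape of a valid crypto-key resource name.
KEY_NAME_RE = re.compile(
    r"^projects/[^/]+/locations/[^/]+/keyRings/[^/]+/cryptoKeys/[^/]+$")

name = kms_key_name("fin-data-prod", "us", "bq-ring", "bq-cmek")
```

Note the key must live in a location compatible with the BigQuery dataset's location for CMEK to work.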

Question 46

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 46 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 46
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A company is deploying their application on Google Cloud Platform. Company policy requires long-term data to be stored using a solution that can automatically replicate data over at least two geographic places.
Which Storage solution are they allowed to use?

  • A. Cloud Bigtable
  • B. Cloud BigQuery
  • C. Compute Engine SSD Disk
  • D. Compute Engine Persistent Disk
Show Suggested Answer Hide Answer
Suggested Answer: B 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
ronron89
Highly Voted 4 years, 4 months ago
https://cloud.google.com/bigquery
Answer is B. "BigQuery transparently and automatically provides highly durable, replicated storage in multiple locations and high availability with no extra charge and no additional setup."
@xhova: https://cloud.google.com/bigquery-transfer/docs/locations. What it mentions there is that once you create a dataset, you cannot change its location. Here the question is about high availability, synchronous replication.
upvoted 15 times
mistryminded
3 years, 4 months ago
Correct answer is B. BQ: https://cloud.google.com/bigquery-transfer/docs/locations#multi-regional-locations and https://cloud.google.com/bigquery-transfer/docs/locations#colocation_required Bigtable: https://cloud.google.com/bigtable/docs/locations PS: To people that are only commenting an answer, please provide a valid source to back your answers. This is a community driven forum and just spamming with wrong answers affects all of us.
upvoted 8 times
...
Arad
3 years, 4 months ago
Correct answer is A. B is not correct because: "BigQuery does not automatically provide a backup or replica of your data in another geographic region." https://cloud.google.com/bigquery/docs/availability
upvoted 6 times
mynk29
3 years, 1 month ago
"In either case, BigQuery automatically stores copies of your data in two different Google Cloud zones within the selected location." your link
upvoted 4 times
...
...
...
YourFriendlyNeighborhoodSpider
Most Recent 4 weeks, 1 day ago
Selected Answer: B
B. Cloud BigQuery
Explanation: Cloud BigQuery is a fully managed data warehouse that automatically replicates data across multiple geographic regions to ensure high availability and durability. This aligns perfectly with the company policy requiring long-term data storage under these conditions.
A. Cloud Bigtable: While this is a NoSQL database service that supports geographical replication, its design is more specific to big data workloads, and it may not align with a broad requirement for long-term data storage as specifically defined by the question.
upvoted 1 times
...
manishk39
3 months, 2 weeks ago
Selected Answer: A
Bigtable can replicate data across zones within a region and also replicate data across regions. https://cloud.google.com/bigtable/docs/replication-overview
upvoted 1 times
...
ryumoe
9 months, 3 weeks ago
Answer is D, because:
A. Cloud Bigtable: This is a NoSQL database service, not designed for long-term data storage with automatic geographic replication.
B. Cloud BigQuery: This is a data warehouse service, excellent for analyzing data, but it doesn't inherently replicate data for disaster recovery.
C. Compute Engine SSD Disk: These are local disks attached to virtual machines, not designed for long-term storage or automatic replication.
upvoted 1 times
...
nccdebug
1 year, 1 month ago
BigQuery automatically stores copies of your data in two different Google Cloud zones within a single region in the selected location. https://cloud.google.com/bigquery/docs/locations
upvoted 1 times
...
adb4007
1 year, 4 months ago
In my opinion the key word is "automatic", because BigQuery and Bigtable by default store a piece of data in one zone (no replication). With Bigtable, replication is automatic: https://cloud.google.com/bigtable/docs/replication-overview. Copying a dataset in BigQuery is not automatic: https://cloud.google.com/bigquery/docs/managing-datasets#copy-datasets. I go with A.
upvoted 1 times
...
uiuiui
1 year, 5 months ago
Selected Answer: D
This is geographic, not regional, so the correct ans is D.
upvoted 1 times
...
civilizador
1 year, 8 months ago
Answer is A, Cloud Bigtable.
Cloud Bigtable, "Replication": this page provides a detailed overview of how Cloud Bigtable uses replication to increase the availability and durability of your data.
Cloud BigQuery: from the BigQuery product description, you can see that it is mainly focused on analyzing data and does not mention geographic replication of data as a feature.
Compute Engine Disks: the documentation for Compute Engine Disks explains that they are zonal resources, meaning they are replicated within a single zone, but not across multiple zones or regions.
upvoted 1 times
...
megalucio
1 year, 9 months ago
Selected Answer: A
Correct one is A, as BigQuery does not provide replication but multi location storage which is different
upvoted 1 times
...
Ishu_awsguy
1 year, 10 months ago
I am drifting towards D. Regional persistent disks are safe from zonal failures. The question mentions different geo places (not regions). So if zone separation is done in one Google region and we use a regional persistent disk, the data will be safe from failure. Also, why would someone move their DR to BQ? Persistent disks make more sense to me.
upvoted 1 times
...
Ishu_awsguy
1 year, 10 months ago
Point not to be confused: even with BQ multi-region, data is stored in different zones in one region, not in different geographic regions. The question asks for "different geographic places", which means essentially separate zone storage will work. Hence the answer is B (BigQuery), either single region or multi-region; both suffice.
upvoted 1 times
...
deony
1 year, 10 months ago
I think the answer is B. The first reason is that for a long-term data solution, Cloud Storage and BigQuery are suitable. The second is that a BigQuery dataset can be placed in a multi-region, which means two or more regions.
upvoted 1 times
...
Ric350
2 years ago
The answer is definitely A. Here's why: https://cloud.google.com/bigtable/docs/replication-overview#how-it-works
"Replication for Cloud Bigtable lets you increase the availability and durability of your data by copying it across multiple regions or multiple zones within the same region. You can also isolate workloads by routing different types of requests to different clusters."
BQ does not do cross-region replication. The blue highlighted note in the two links below clearly says the following: "Selecting a multi-region location does NOT provide cross-region replication NOR regional redundancy. Data will be stored in a single region within the geographic location."
https://cloud.google.com/bigquery/docs/reliability-disaster#availability_and_durability
https://cloud.google.com/bigquery/docs/locations#multi-regions
upvoted 4 times
...
sameer2803
2 years, 1 month ago
Answer is A. the below statement is from the google cloud documentation. https://cloud.google.com/bigquery/docs/reliability-disaster BigQuery does not automatically provide a backup or replica of your data in another geographic region
upvoted 3 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: B
B. Cloud BigQuery
upvoted 1 times
...
giovy_82
2 years, 7 months ago
Selected Answer: B
I was about to select D, BUT:
- the question says "long term data", which makes me think about BQ;
- the replication of persistent disks is between different ZONES, but the question says "different geo location", which means different regions (if you look at the zone distribution, different zones in the same region are located in the same datacenter).
I still have doubts, since application data are not supposed to be stored in BQ unless it is for analytics and so on. GCS would have been the best choice, but in its absence, B is probably the 1st choice.
upvoted 4 times
Table2022
2 years, 5 months ago
Thank God we have you giovy_82, very good explanation.
upvoted 2 times
...
...
piyush_1982
2 years, 8 months ago
Selected Answer: A
https://cloud.google.com/bigquery/docs/availability#availability_and_durability As per the link above BigQuery does not automatically provide a backup or replica of your data in another geographic region. It only stores copies of data in two different Google Cloud zones within the selected location. Reading through the link https://cloud.google.com/bigtable/docs/replication-overview It states that the Bigtable replicates any changes to your data automatically within a region or multi-region.
upvoted 2 times
...

Question 47

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 47 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 47
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A large e-retailer is moving to Google Cloud Platform with its ecommerce website. The company wants to ensure payment information is encrypted between the customer's browser and GCP when the customers checkout online.
What should they do?

  • A. Configure an SSL Certificate on an L7 Load Balancer and require encryption.
  • B. Configure an SSL Certificate on a Network TCP Load Balancer and require encryption.
  • C. Configure the firewall to allow inbound traffic on port 443, and block all other inbound traffic.
  • D. Configure the firewall to allow outbound traffic on port 443, and block all other outbound traffic.
Show Suggested Answer Hide Answer
Suggested Answer: A 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
ESP_SAP
Highly Voted 3 years, 4 months ago
Correct Answer is (A): the type of traffic that you need your load balancer to handle is another factor in determining which load balancer to use. For HTTP and HTTPS traffic, use External HTTP(S) Load Balancing. https://cloud.google.com/load-balancing/docs/load-balancing-overview#external_versus_internal_load_balancing
upvoted 11 times
...
fandyadam
Most Recent 4 months, 3 weeks ago
Selected Answer: A
upvoted 2 times
...
pedrojorge
1 year, 2 months ago
Selected Answer: A
A is right
upvoted 2 times
...
[Removed]
3 years, 5 months ago
Ans - A
upvoted 2 times
...
CHECK666
3 years, 6 months ago
A is the answer, SSL certificate on L7 layer LoadBlanacer
upvoted 3 times
...
ArizonaClassics
3 years, 8 months ago
A is the correct one. The question is testing whether you understand the difference between Layer 7 and Layer 4 protocols.
upvoted 2 times
...
smart123
3 years, 9 months ago
Option 'A' is the correct answer.
upvoted 1 times
...
srinidutt
3 years, 10 months ago
A is right
upvoted 1 times
...
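Terminating TLS on the L7 load balancer covers encryption in transit, and "require encryption" typically means redirecting any plain-HTTP request to HTTPS so checkout traffic is never sent in the clear. A small stdlib-only sketch of that rewrite, using a hypothetical checkout URL:

```python
from urllib.parse import urlsplit, urlunsplit

def force_https(url: str) -> str:
    """Return the HTTPS equivalent of a URL. Behind an HTTPS load
    balancer, the app (or the LB itself) would redirect plain-HTTP
    checkout requests with an equivalent rewrite."""
    parts = urlsplit(url)
    if parts.scheme == "https":
        return url  # already encrypted; nothing to do
    # Keep host, path, query, and fragment; swap only the scheme.
    return urlunsplit(("https",) + tuple(parts[1:]))
```

On GCP this redirect can also be configured directly on the external HTTP(S) load balancer, so the backend never sees unencrypted checkout requests.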

Question 48

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 48 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 48
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Applications often require access to `secrets` - small pieces of sensitive data at build or run time. The administrator managing these secrets on GCP wants to keep a track of `who did what, where, and when?` within their GCP projects.
Which two log streams would provide the information that the administrator is looking for? (Choose two.)

  • A. Admin Activity logs
  • B. System Event logs
  • C. Data Access logs
  • D. VPC Flow logs
  • E. Agent logs
Show Suggested Answer Hide Answer
Suggested Answer: AC 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
Ganshank
Highly Voted 3 years, 10 months ago
Agreed AC. https://cloud.google.com/secret-manager/docs/audit-logging
upvoted 13 times
...
ArizonaClassics
Most Recent 7 months, 4 weeks ago
AC: Read https://cloud.google.com/logging/docs/audit#admin-activity
upvoted 2 times
...
[Removed]
8 months, 3 weeks ago
Selected Answer: AC
A, C. https://cloud.google.com/secret-manager/docs/audit-logging#available-logs
upvoted 3 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: AC
A. Admin Activity logs C. Data Access logs
upvoted 2 times
...
DebasishLowes
3 years, 1 month ago
Ans AC
upvoted 4 times
...
[Removed]
3 years, 5 months ago
Ans - AC
upvoted 2 times
...
CHECK666
3 years, 6 months ago
AC is the answer: Admin Activity logs and Data Access logs.
upvoted 3 times
...
smart123
3 years, 9 months ago
Yes 'A & C' are the right answers.
upvoted 2 times
...
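The two streams in the voted answer surface in Cloud Logging as the `cloudaudit.googleapis.com%2Factivity` and `cloudaudit.googleapis.com%2Fdata_access` logs. A minimal sketch of composing a filter over both for Secret Manager, with a hypothetical project ID; note that Data Access logging must first be enabled for the service in the project's IAM audit configuration, or no entries will appear:

```python
def audit_log_filter(project: str,
                     service: str = "secretmanager.googleapis.com") -> str:
    """Compose a Cloud Logging filter matching the Admin Activity and
    Data Access audit log streams for one service in one project.
    The project ID passed in is an illustrative placeholder."""
    logs = ["cloudaudit.googleapis.com%2Factivity",
            "cloudaudit.googleapis.com%2Fdata_access"]
    # One logName clause per audit stream, OR-ed together.
    log_clause = " OR ".join(
        f'logName="projects/{project}/logs/{log}"' for log in logs)
    return f'({log_clause}) AND protoPayload.serviceName="{service}"'

query = audit_log_filter("my-secrets-project")
```

The resulting string can be pasted into the Logs Explorer query box or passed to a log sink to answer "who did what, where, and when" for secret access.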

Question 49

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 49 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 49
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are in charge of migrating a legacy application from your company datacenters to GCP before the current maintenance contract expires. You do not know what ports the application is using and no documentation is available for you to check. You want to complete the migration without putting your environment at risk.
What should you do?

  • A. Migrate the application into an isolated project using a "Lift & Shift" approach. Enable all internal TCP traffic using VPC Firewall rules. Use VPC Flow logs to determine what traffic should be allowed for the application to work properly.
  • B. Migrate the application into an isolated project using a "Lift & Shift" approach in a custom network. Disable all traffic within the VPC and look at the Firewall logs to determine what traffic should be allowed for the application to work properly.
  • C. Refactor the application into a micro-services architecture in a GKE cluster. Disable all traffic from outside the cluster using Firewall Rules. Use VPC Flow logs to determine what traffic should be allowed for the application to work properly.
  • D. Refactor the application into a micro-services architecture hosted in Cloud Functions in an isolated project. Disable all traffic from outside your project using Firewall Rules. Use VPC Flow logs to determine what traffic should be allowed for the application to work properly.
Suggested Answer: A 🗳️
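As a rough sketch of what option A looks like in practice (all resource, network, and range names here are hypothetical, not from the question):

```shell
# Enable VPC Flow Logs on the subnet hosting the migrated VM,
# so observed traffic can later be used to derive narrow firewall rules.
gcloud compute networks subnets update legacy-app-subnet \
    --region=us-central1 \
    --enable-flow-logs

# Temporarily allow all internal TCP traffic so the lifted-and-shifted
# app keeps working while flow logs reveal the ports it actually uses.
gcloud compute firewall-rules create allow-internal-tcp \
    --network=legacy-app-vpc \
    --direction=INGRESS \
    --allow=tcp \
    --source-ranges=10.0.0.0/8
```

Once the flow logs show which ports the application really uses, the broad allow rule can be replaced with narrow per-port rules.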

Comments

rafaelc
Highly Voted 4 years, 7 months ago
A or B. Leaning towards A: you have a deadline and cannot develop a new app, so you have to lift and shift.
upvoted 20 times
xhova
4 years, 6 months ago
Answer is A. You need VPC Flow Logs, not the "Firewall logs" stated in B.
upvoted 13 times
Table2022
1 year, 11 months ago
xhova, you got it right!
upvoted 3 times
...
smart123
4 years, 3 months ago
I agree.
upvoted 2 times
...
...
mynk29
2 years, 7 months ago
Agree: "Disable all traffic within the VPC and look at the Firewall logs to determine what traffic should be allowed for the application to work properly." If you disable all the VPC traffic, there will be nothing to see in the firewall logs.
upvoted 8 times
...
...
YourFriendlyNeighborhoodSpider
Most Recent 4 weeks, 1 day ago
Selected Answer: B
The best option to complete the migration of the legacy application without putting your environment at risk is: B. Migrate the application into an isolated project using a “Lift & Shift” approach in a custom network. Disable all traffic within the VPC and look at the Firewall logs to determine what traffic should be allowed for the application to work properly. Explanation: Disable All Traffic: By disabling all traffic initially, you can ensure that no unauthorized traffic can access the application. This setup provides a secure environment. Using Firewall Logs: This approach allows you to monitor what traffic is necessary for the application to function correctly after migration. You can analyze the Firewall logs to identify which ports and protocols are being used by the application, enabling you to refine your security configurations based on actual usage.
upvoted 1 times
...
cskhachane
7 months, 2 weeks ago
Option C:
upvoted 1 times
...
okhascorpio
7 months, 3 weeks ago
Selected Answer: A
B is not correct because Disabling all traffic within the VPC is too restrictive and hinders even initial testing. Analyzing firewall logs without any initial connectivity wouldn't be feasible.
upvoted 2 times
...
Xoxoo
1 year ago
Selected Answer: A
Option B, C, and D involve making significant architectural changes (refactoring into microservices or using Cloud Functions) and disabling traffic, which might introduce complexities and risks. These options are more suitable when you have a better understanding of the application's requirements and can make informed decisions about its architecture and network policies. In your current scenario, option A provides a safe starting point for the migration process while you gather more information about the application's behavior.
upvoted 3 times
...
ArizonaClassics
1 year ago
B. This option is similar to the first one but is more secure initially. The application is also migrated using a "Lift & Shift" approach. However, instead of enabling all internal TCP traffic, all traffic within the VPC is disabled. The Firewall logs (not exactly the most ideal tool but can give insights) are then used to determine what traffic is needed. This is more secure as it takes a deny-all-first approach.
upvoted 1 times
...
amanshin
1 year, 3 months ago
Option A is a valid approach, but it is not as secure as Option C. In Option A, the application is still exposed to the network, even if it is in an isolated project. This means that if someone were to find a vulnerability in the application, they could potentially exploit it to gain access to the application. In Option C, the application is isolated from the network by being deployed to a GKE cluster. This means that even if someone were to find a vulnerability in the application, they would not be able to exploit it to gain access to the application. Additionally, Option C is more scalable and resilient than Option A. This is because a GKE cluster can be scaled up or down as needed, and it is more resistant to failure than a single VM. Therefore, Option C is the more secure and scalable approach. However, if you are short on time, Option A may be a better option.
upvoted 2 times
...
Joanale
1 year, 5 months ago
A is the best option; remember the contract deadline is pressing. Moving to microservices takes too long and requires knowing the detailed application architecture. Answer A.
upvoted 2 times
...
Ric350
1 year, 6 months ago
The answer is A. In real life you would NOT lift and shift an application especially not knowing the ports it uses nor any documentation. That'd be disruptive and cause an outage until you figured it out. You'd be out of a job! The question also clearly states "You want to complete the migration without putting your environment at risk!" You'd have to refactor the application in parallel and makes sense if it's a legacy application. You'd want to modernize it with microservices so it can take advantage of all cloud features. If you simply lift and shift, the legacy app cannot take advantage of cloud services so what's the point? You still have the same problems except now you've moved it from on-prem to the cloud.
upvoted 3 times
Ric350
1 year, 6 months ago
Excuse me, C is the correct answer for the reasons listed below. You try lifting and shift a company application without the proper dependencies of how it works, cause a disruption or outage until you figure it out and let me know how that works for you and if you'll still have a job.
upvoted 1 times
...
...
sameer2803
1 year, 7 months ago
Answer is B. Even if you disable all traffic within the VPC, requests to the application will hit the firewall and get a deny-ingress response; that way we get to know what port it is coming in on. The same can be determined by allowing all traffic in (which exposes your application to the world), but the question ends with "without putting your environment at risk".
upvoted 2 times
...
pedrojorge
1 year, 8 months ago
Selected Answer: B
B, as A temporarily opens vulnerable paths in the system.
upvoted 3 times
...
somnathmaddi
1 year, 9 months ago
Selected Answer: A
Answer is A. You need VPC Flow Logs, not the "Firewall logs" stated in B.
upvoted 4 times
...
Mixxer5
1 year, 10 months ago
Selected Answer: A
A since B disrupts the system. C and D are out of question if it's supposed to "just work".
upvoted 4 times
...
Meyucho
1 year, 10 months ago
Selected Answer: B
The difference between A and B is that, in the first, you allow all traffic so the app will work after migration, and you can investigate which ports should be open and then take action. If you go with B, you will have a disruption window until you figure out all the ports needed, but you will not have any unneeded port open. So... if you are asked to avoid disruption, go with A; if (as in this question) you are asked about security, go with B.
upvoted 4 times
pedrojorge
1 year, 8 months ago
The question never asks to avoid disruption, it asks to avoid risk, so the answer must be B.
upvoted 2 times
...
...
AwesomeGCP
2 years ago
Selected Answer: A
A. Migrate the application into an isolated project using a "Lift & Shift" approach. Enable all internal TCP traffic using VPC Firewall rules. Use VPC Flow logs to determine what traffic should be allowed for the application to work properly.
upvoted 4 times
...
GPK
2 years, 9 months ago
These questions are no longer relevant, as Google has changed the exam and made it really challenging now.
upvoted 1 times
vicky_cyber
2 years, 9 months ago
Could you please help us with recent dumps, or advise which dump should be referred to?
upvoted 2 times
Bwitch
2 years, 8 months ago
This one is accurate.
upvoted 2 times
...
...
...
rr4444
2 years, 10 months ago
Selected Answer: B
B - VPC Flow Logs. Firewall logging only covers TCP and UDP, and you explicitly don't know what the app does. That limitation also matters because the implied deny-all ingress and deny-all egress rules are not covered by Firewall Logging. Plus you have to enable Firewall Logging per rule, so you'd have to have a rule for everything in advance; chicken and egg... you don't know what is going on, so how could you!?
upvoted 1 times
rr4444
2 years, 10 months ago
VPC FLow logs is A! I meant A!
upvoted 2 times
...
...

Question 50


Exam Professional Cloud Security Engineer topic 1 question 50 discussion

Question #: 50
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your company has deployed an application on Compute Engine. The application is accessible by clients on port 587. You need to balance the load between the different instances running the application. The connection should be secured using TLS, and terminated by the Load Balancer.
What type of Load Balancing should you use?

  • A. Network Load Balancing
  • B. HTTP(S) Load Balancing
  • C. TCP Proxy Load Balancing
  • D. SSL Proxy Load Balancing
Suggested Answer: D 🗳️
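A minimal gcloud sketch of option D, assuming a backend service for the Compute Engine instances already exists (all resource names are illustrative):

```shell
# Upload the TLS certificate the load balancer will use to terminate TLS.
gcloud compute ssl-certificates create mail-cert \
    --certificate=cert.pem --private-key=key.pem --global

# The SSL proxy terminates client TLS at the load balancing layer.
gcloud compute target-ssl-proxies create mail-ssl-proxy \
    --backend-service=mail-backend \
    --ssl-certificates=mail-cert

# 587 is among the ports SSL Proxy Load Balancing supports.
gcloud compute forwarding-rules create mail-forwarding-rule \
    --global \
    --target-ssl-proxy=mail-ssl-proxy \
    --ports=587
```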

Comments

smart123
Highly Voted 3 years, 9 months ago
Although both TCP Proxy LB and SSL Proxy LB support port 587, only SSL Proxy LB supports TLS termination. Hence 'D' is the right answer.
upvoted 19 times
...
umashankar_a
Highly Voted 2 years, 9 months ago
Answer D https://cloud.google.com/load-balancing/docs/ssl - SSL Proxy Load Balancing is a reverse proxy load balancer that distributes SSL traffic coming from the internet to virtual machine (VM) instances in your Google Cloud VPC network. When using SSL Proxy Load Balancing for your SSL traffic, user SSL (TLS) connections are terminated at the load balancing layer, and then proxied to the closest available backend instances by using either SSL (recommended) or TCP.
upvoted 6 times
...
[Removed]
Most Recent 8 months, 3 weeks ago
Selected Answer: D
"D" Although port 587 is SMTP (mail) which is an Application Layer protocol, and one might think an Application Layer (HTTPs) Load balancer is needed, according to Google docs, Application Layer LBs offload TLS at GFE which may or may not be the LB. Only the Network Proxy LB confirms TLS offloading at LB layer. Also, as a general rule, they recommend Network Proxy LB for TLS Offloading: "..As a general rule, you'd choose an Application Load Balancer when you need a flexible feature set for your applications with HTTP(S) traffic. You'd choose a proxy Network Load Balancer to implement TLS offload.." References: https://cloud.google.com/load-balancing/docs/choosing-load-balancer#flow_chart https://cloud.google.com/load-balancing/docs/https#control-tls-termination
upvoted 2 times
...
Ishu_awsguy
10 months, 2 weeks ago
We can use an HTTPS load balancer and change the backend service's port to 587. An HTTPS load balancer will also work.
upvoted 2 times
Ishu_awsguy
10 months, 2 weeks ago
"Accessible by clients on port 587" is the key phrase. Agree with D.
upvoted 1 times
...
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: D
Answer D. SSL Proxy Load Balancing https://cloud.google.com/load-balancing/docs/ssl
upvoted 1 times
...
dtmtor
3 years ago
Answer: D
upvoted 1 times
...
DebasishLowes
3 years, 1 month ago
Ans : D
upvoted 1 times
...
[Removed]
3 years, 5 months ago
Ans - D
upvoted 1 times
...
CHECK666
3 years, 6 months ago
D is the answer. SSL Proxy LoadBalancer supports TLS.
upvoted 2 times
...
mlyu
3 years, 7 months ago
Agreed with smart123. Ans is D https://cloud.google.com/load-balancing/docs/choosing-load-balancer#flow_chart
upvoted 3 times
...

Question 51


Exam Professional Cloud Security Engineer topic 1 question 51 discussion

Question #: 51
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You want to limit the images that can be used as the source for boot disks. These images will be stored in a dedicated project.
What should you do?

  • A. Use the Organization Policy Service to create a compute.trustedimageProjects constraint on the organization level. List the trusted project as the whitelist in an allow operation.
  • B. Use the Organization Policy Service to create a compute.trustedimageProjects constraint on the organization level. List the trusted projects as the exceptions in a deny operation.
  • C. In Resource Manager, edit the project permissions for the trusted project. Add the organization as member with the role: Compute Image User.
  • D. In Resource Manager, edit the organization permissions. Add the project ID as member with the role: Compute Image User.
Suggested Answer: A 🗳️
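One way to express option A with gcloud (the organization ID and project name below are placeholders):

```shell
# Restrict boot-disk images to the dedicated image project by adding it
# to the allowed values of the trusted image projects list constraint.
gcloud resource-manager org-policies allow \
    compute.trustedImageProjects \
    --organization=123456789012 \
    projects/trusted-images-project
```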

Comments

DebasishLowes
Highly Voted 3 years, 6 months ago
Ans : A
upvoted 13 times
...
[Removed]
Highly Voted 3 years, 11 months ago
Ans - A https://cloud.google.com/compute/docs/images/restricting-image-access#trusted_images
upvoted 8 times
...
nccdebug
Most Recent 7 months, 3 weeks ago
Correct Answer is: A. Option B suggests listing the trusted projects as exceptions in a deny operation, which is not necessary or recommended. It's simpler and more secure to explicitly allow only the trusted project
upvoted 1 times
...
Xoxoo
1 year ago
Selected Answer: A
To limit the images that can be used as the source for boot disks and store these images in a dedicated project, you should use option A: A. Use the Organization Policy Service to create a compute.trustedimageProjects constraint on the organization level. List the trusted project as the whitelist in an allow operation. Here's why this option is appropriate: Organization-Wide Control: Creating an organization-level constraint allows you to enforce the policy organization-wide, ensuring consistent image usage across all projects within the organization. Whitelist Approach: By listing the trusted project as a whitelist in an "allow" operation, you explicitly specify which project can be trusted as the source for boot disks. This is a more secure approach because it only allows specific trusted projects. Dedicated Project: You mentioned that the images are stored in a dedicated project, and this option aligns with that requirement.
upvoted 3 times
Xoxoo
1 year ago
Option B introduces complexity by listing the trusted projects as exceptions in a "deny" operation, which can become challenging to manage as more projects are added.
upvoted 1 times
...
...
Joanale
1 year, 4 months ago
Actually the default policy is allow *, and if you add a constraint it must be a "deny" rule with exceptionsPrincipals or denial conditions. So the answer is B; there's no "whitelist".
upvoted 1 times
...
meh009
1 year, 10 months ago
Selected Answer: A
https://cloud.google.com/compute/docs/images/restricting-image-access#gcloud Look at the gcloud examples and it will make sense why A is correct.
upvoted 3 times
...
AzureDP900
1 year, 11 months ago
A is right. Use the Trusted Images feature to define an organization policy that allows principals to create persistent disks only from images in specific projects.
upvoted 2 times
AzureDP900
1 year, 11 months ago
https://cloud.google.com/compute/docs/images/restricting-image-access
upvoted 1 times
...
...
AwesomeGCP
2 years ago
Selected Answer: A
Answer A. Use the Organization Policy Service to create a compute.trustedimageProjects constraint on the organization level. List the trusted project as the whitelist in an allow operation.
upvoted 2 times
...
piyush_1982
2 years, 2 months ago
To me the answer seems to be B. https://cloud.google.com/compute/docs/images/restricting-image-access By default, instances can be created from images in any project that shares images publicly or explicitly with the user. So there is an implicit allow. Option B states that we need to deny all the projects from being used as a trusted project and add "Trusted Project" as an exception to that rule.
upvoted 4 times
piyush_1982
2 years, 2 months ago
Nope, I think I am getting confused. The correct answer is A.
upvoted 1 times
...
...
simbu1299
2 years, 6 months ago
Selected Answer: A
Answer is A
upvoted 2 times
...
danielklein09
2 years, 6 months ago
Answer is B. You don’t whitelist in an allow operation. Since there is an implicit allow, the purpose of the whitelist has been defeated.
upvoted 3 times
gcpengineer
1 year, 4 months ago
implicit deny
upvoted 1 times
...
...
CHECK666
4 years ago
A is the answer. You need an allow operation.
upvoted 1 times
...
ownez
4 years, 1 month ago
I agree with B. "https://cloud.google.com/compute/docs/images/restricting-image-access"
upvoted 2 times
ownez
4 years, 1 month ago
Answer is A. "Use the Trusted image feature to define an organization policy that allows your project members to create persistent disks only from images in specific projects." "After sharing your images with other users, you can control where those users employ those resources within your organization. Set the constraints/compute.storageResourceUseRestrictions constraint to define the projects where users are permitted to use your storage resources."
upvoted 4 times
Sheeda
4 years, 1 month ago
Yes, A made sense to me too.
upvoted 1 times
...
...
...

Question 52


Exam Professional Cloud Security Engineer topic 1 question 52 discussion

Question #: 52
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your team needs to prevent users from creating projects in the organization. Only the DevOps team should be allowed to create projects on behalf of the requester.
Which two tasks should your team perform to handle this request? (Choose two.)

  • A. Remove all users from the Project Creator role at the organizational level.
  • B. Create an Organization Policy constraint, and apply it at the organizational level.
  • C. Grant the Project Editor role at the organizational level to a designated group of users.
  • D. Add a designated group of users to the Project Creator role at the organizational level.
  • E. Grant the billing account creator role to the designated DevOps team.
Suggested Answer: AD 🗳️
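The two chosen tasks can be sketched with gcloud as follows (the organization ID, domain, and group are placeholders):

```shell
# A: remove the default grant that gives every user in the domain
# the Project Creator role at the organization level.
gcloud organizations remove-iam-policy-binding 123456789012 \
    --member="domain:example.com" \
    --role="roles/resourcemanager.projectCreator"

# D: grant Project Creator only to the designated DevOps group.
gcloud organizations add-iam-policy-binding 123456789012 \
    --member="group:devops@example.com" \
    --role="roles/resourcemanager.projectCreator"
```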

Comments

mlyu
Highly Voted 4 years, 7 months ago
I think the answer is AD, because we first need to stop users from creating projects (A), and then allow the DevOps team to create projects (D).
upvoted 19 times
...
[Removed]
Highly Voted 4 years ago
AD is the answer. If a constraint is added, no project creation will be allowed at all; hence B is wrong.
upvoted 7 times
...
taka5094
Most Recent 7 months, 1 week ago
E. I think that the billing account creator role is needed in this case. https://cloud.google.com/resource-manager/docs/default-access-control#removing-default-roles "After you designate your own Billing Account Creator and Project Creator roles, you can remove these roles from the organization resource to restrict those permissions to specifically designated users. "
upvoted 1 times
...
[Removed]
1 year, 8 months ago
Selected Answer: AD
"A,D" seems most accurate. The following page talks about how Project Creator role is granted to all users by default, which is why "A" is necessary. And then there's a section about granting Project Creator to specific users which is where "D" comes in. https://cloud.google.com/resource-manager/docs/default-access-control#removing-default-roles
upvoted 1 times
...
AzureDP900
2 years, 5 months ago
AD is perfect. A. Remove all users from the Project Creator role at the organizational level. D. Add a designated group of users to the Project Creator role at the organizational level.
upvoted 1 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: AD
A. Remove all users from the Project Creator role at the organizational level. D. Add a designated group of users to the Project Creator role at the organizational level. https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints
upvoted 3 times
AzureDP900
2 years, 5 months ago
AD is correct
upvoted 1 times
...
...
Jeanphi72
2 years, 8 months ago
Selected Answer: AD
https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints I see no way to restrict project creation with an organizational policy. If that had been possible I would have voted for it, as restrictions can be overridden in GCP.
upvoted 4 times
...
piyush_1982
2 years, 8 months ago
Selected Answer: AC
Seems to be AC. When an organization resource is created, all users in your domain are granted the Billing Account Creator and Project Creator roles by default, as per the link https://cloud.google.com/resource-manager/docs/default-access-control#removing-default-roles. Hence A is definitely the answer. Now, to designate project creators, we need to add the designated group to the Project Creator role specifically.
upvoted 1 times
...
absipat
2 years, 10 months ago
ad of course
upvoted 1 times
...
syllox
3 years, 11 months ago
Ans AC also
upvoted 1 times
syllox
3 years, 11 months ago
AD. C is a mistake: it's Project Editor, not Project Creator.
upvoted 3 times
...
...
DebasishLowes
4 years, 1 month ago
Ans : AD
upvoted 4 times
...
Aniyadu
4 years, 3 months ago
A & D is the right answer.
upvoted 4 times
...
[Removed]
4 years, 5 months ago
Ans - AD
upvoted 3 times
...
genesis3k
4 years, 5 months ago
I think AC, because a role is granted to a user/group, rather than a user/group being added to a role.
upvoted 1 times
syllox
3 years, 11 months ago
C is a mistake it's project Editor and not creator
upvoted 1 times
...
...
CHECK666
4 years, 6 months ago
AD is the answer. There's nothing related to project creation in organization policy constraints.
upvoted 4 times
...

Question 53


Exam Professional Cloud Security Engineer topic 1 question 53 discussion

Question #: 53
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A customer deployed an application on Compute Engine that takes advantage of the elastic nature of cloud computing.
How can you work with Infrastructure Operations Engineers to best ensure that Windows Compute Engine VMs are up to date with all the latest OS patches?

  • A. Build new base images when patches are available, and use a CI/CD pipeline to rebuild VMs, deploying incrementally.
  • B. Federate a Domain Controller into Compute Engine, and roll out weekly patches via Group Policy Object.
  • C. Use Deployment Manager to provision updated VMs into new serving Instance Groups (IGs).
  • D. Reboot all VMs during the weekly maintenance window and allow the StartUp Script to download the latest patches from the internet.
Suggested Answer: A 🗳️
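A possible shape of the pipeline steps behind option A (the image, template, project, and instance group names are hypothetical):

```shell
# Capture a freshly patched disk as a new base image.
gcloud compute images create app-base-2024-06 \
    --source-disk=patched-build-disk \
    --source-disk-zone=us-central1-a

# Bake the new image into an instance template.
gcloud compute instance-templates create app-template-v2 \
    --image=app-base-2024-06 \
    --image-project=image-factory-project \
    --machine-type=e2-standard-4

# Roll the template out incrementally across the managed instance group,
# replacing VMs a few at a time instead of patching them in place.
gcloud compute instance-groups managed rolling-action start-update app-mig \
    --version=template=app-template-v2 \
    --max-unavailable=1 \
    --zone=us-central1-a
```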

Comments

genesis3k
Highly Voted 3 years, 11 months ago
Answer is A. Compute Engine doesn't automatically update the OS or the software on your deployed instances. You will need to patch or update your deployed Compute Engine instances when necessary. However, in the cloud it is not recommended that you patch or update individual running instances. Instead it is best to patch the image that was used to launch the instance and then replace each affected instance with a new copy.
upvoted 22 times
...
anciaosinclinado
Most Recent 1 month ago
Selected Answer: C
Seems this is an old question, now Deployment Manager is able to update base images: https://cloud.google.com/deployment-manager/docs/reference/latest/deployments/patch
upvoted 1 times
...
nccdebug
7 months, 3 weeks ago
VM Manager is a suite of tools that can be used to manage operating systems for large virtual machine (VM) fleets running Windows and Linux on Compute Engine. VM Manager helps drive efficiency through automation and reduces the operational burden of maintaining these VM fleets. https://cloud.google.com/compute/docs/vm-manager
upvoted 3 times
...
b6f53d8
9 months, 1 week ago
Question is outdated, Since 2020 Google has VM Manager for updating VMs (Linux and Windows)
upvoted 3 times
...
habros
11 months, 1 week ago
Selected Answer: A
A. Use a tool like HashiCorp Packer to package the VM images using CI/CD
upvoted 2 times
...
[Removed]
1 year, 2 months ago
Selected Answer: A
"A" Applying an OS level patch typically requires a reboot. Rebooting a VM that is actively serving live traffic will have a negative impact on the availability of the service and the user experience and therefore the business. Out of all the options, only option A emphasises the rolling/gradual deployment of the patch through base images. References: https://cloud.google.com/compute/docs/os-patch-management#scheduled_patching
upvoted 2 times
...
Ric350
1 year, 6 months ago
The answer is definitely D. You would build new base images or deploy new VMs, because then you'd have a base OS server with no application on it. You'd have to re-install the app and configure it as well. You'd have to find a maintenance window that allows you to patch the server, not re-build it! Even the OS patch management doc link below mentions scheduling a time or doing it on demand. You schedule prod systems and patch the dev/test/staging server on demand because it's not production. Think practically here. D is the obvious answer.
upvoted 2 times
Ric350
1 year, 6 months ago
correction "would NOT"
upvoted 1 times
...
...
AwesomeGCP
2 years ago
Selected Answer: A
A. Build new base images when patches are available, and use a CI/CD pipeline to rebuild VMs, deploying incrementally.
upvoted 2 times
PATILDXB
1 year, 9 months ago
You cannot use a CI/CD pipeline for building VMs; it is used only for code deployment. Further, building base images is a one-time activity; organisations cannot afford to change the base image every time a patch is released. So, C is the answer.
upvoted 1 times
gcpengineer
1 year, 4 months ago
I use CI/CD to build VMs.
upvoted 1 times
...
ftpt
1 year, 2 months ago
You can use CI/CD with Terraform to create new VMs.
upvoted 1 times
...
...
...
Aiffone
2 years, 4 months ago
C is obviously the answer. MIGs help you make sure the machines deployed use the latest image if you want. What's more, it's meant to be an elastic system, and nothing does that better than MIGs.
upvoted 1 times
Jeanphi72
2 years, 2 months ago
Not sure. Deployment Manager can indeed create a new MIG and configure a new deployment of machines with the latest OS, but what about the existing ones? In addition, how do you make sure the rollout will be smooth? Option A seems more realistic.
upvoted 2 times
...
...
VenkatGCP1
2 years, 9 months ago
The answer is A, we are using this in practice as a solution from Google in one of the top 5 banks for managing windows image patching.
upvoted 4 times
AzureDP900
1 year, 11 months ago
Agreed.
upvoted 1 times
...
...
lxs
2 years, 10 months ago
Selected Answer: A
Definitely it will be A. The solution must take advantage of the elasticity of Compute Engine, so you create a template with a patched OS base and redeploy the images.
upvoted 2 times
...
sc_cloud_learn
3 years, 3 months ago
The answer should be A. C talks about MIGs, which may not always be needed.
upvoted 1 times
...
DebasishLowes
3 years, 6 months ago
Ans : A
upvoted 2 times
gu9singg
3 years, 6 months ago
Are these questions still valid for the exam?
upvoted 1 times
umashankar_a
3 years, 3 months ago
yeah....even i'm thinking the same, as we got OS Patch Management Service now in GCP for Patching Compute machines as per requirement. https://cloud.google.com/compute/docs/os-patch-management. Not really sure on the answer.
upvoted 4 times
DuncanTu
3 years, 3 months ago
Hi May I know why C is incorrect?
upvoted 1 times
...
...
...
...
HateMicrosoft
3 years, 7 months ago
The correct answer is C. https://cloud.google.com/deployment-manager/docs/reference/latest/deployments/patch
upvoted 1 times
...
CloudTrip
3 years, 7 months ago
Given the options here Answer D seems practical
upvoted 1 times
...
singhjoga
3 years, 9 months ago
B seems the only possible answer. Windows patches are configured using Group Policies on the Windows Domain Controller. All other windows machines should be part of the same domain.
upvoted 1 times
...
FatCharlie
3 years, 10 months ago
The answer is A. This is referring to VMs in an instance group which has built in roll out deployment of new images that can easily be integrated into a CI/CD pipeline. The people mentioning the patch management tool are considering these to be long running VMs, but that makes little sense in an instance group.
upvoted 3 times
...

Question 54


Exam Professional Cloud Security Engineer topic 1 question 54 discussion

Question #: 54
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your team needs to make sure that their backend database can only be accessed by the frontend application and no other instances on the network.
How should your team design this network?

  • A. Create an ingress firewall rule to allow access only from the application to the database using firewall tags.
  • B. Create a different subnet for the frontend application and database to ensure network isolation.
  • C. Create two VPC networks, and connect the two networks using Cloud VPN gateways to ensure network isolation.
  • D. Create two VPC networks, and connect the two networks using VPC peering to ensure network isolation.
Suggested Answer: A 🗳️
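A sketch of option A's tag-based rule (the network name, tags, and port are assumptions; 3306 stands in for whatever port the database listens on):

```shell
# Only VMs tagged "frontend" may reach VMs tagged "db",
# and only on the database port; everything else is implicitly denied.
gcloud compute firewall-rules create allow-frontend-to-db \
    --network=app-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:3306 \
    --source-tags=frontend \
    --target-tags=db
```

Note that the same rule can be written with `--source-service-accounts`/`--target-service-accounts` instead of tags, which is often preferred because network tags are not access-controlled.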

Comments

singhjoga
Highly Voted 3 years, 9 months ago
Although A is correct, B would be more secure when combined with firewall rules to restrict traffic based on subnets. The ideal solution would be to use service-account-based firewall rules instead of tag-based ones. See the paragraph below from https://cloud.google.com/solutions/best-practices-vpc-design: "However, even though it is possible to use tags for target filtering in this manner, we recommend that you use service accounts where possible. Target tags are not access-controlled and can be changed by someone with the instanceAdmin role while VMs are in service. Service accounts are access-controlled, meaning that a specific user must be explicitly authorized to use a service account. There can only be one service account per instance, whereas there can be multiple tags. Also, service accounts assigned to a VM can only be changed when the VM is stopped."
upvoted 7 times
ThisisJohn
2 years, 10 months ago
You may be right but B doesn't mention anything about firewall rules, thus we need to assume there will be communication between both subnets
upvoted 2 times
Aiffone
2 years, 4 months ago
I'm inclined to go with A too because without firewall rules the subnets in B would ensure there is no communication at all due to default implicit rules.
upvoted 1 times
...
...
...
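To make the trade-off discussed above concrete, here is a minimal sketch contrasting the tag-based firewall rule in option A with the service-account-based variant the best-practices guide recommends. The rule names, ports, and service-account addresses are hypothetical; the dictionaries only mirror the shape of the Compute Engine firewall resource (`sourceTags`/`targetTags` vs. `sourceServiceAccounts`/`targetServiceAccounts`).

```python
# Sketch of two Compute Engine firewall rule bodies (REST resource shape),
# contrasting tag-based targeting (option A) with the service-account-based
# targeting recommended in the VPC design best practices. All names are
# hypothetical examples.

def tag_based_rule():
    # Anyone with the instanceAdmin role can re-tag a running VM, so
    # membership in the "db" target set is not access-controlled.
    return {
        "name": "allow-app-to-db",
        "direction": "INGRESS",
        "allowed": [{"IPProtocol": "tcp", "ports": ["3306"]}],
        "sourceTags": ["app"],
        "targetTags": ["db"],
    }

def service_account_based_rule():
    # A VM's service account can only be changed while the VM is stopped,
    # and using a service account requires explicit IAM authorization.
    return {
        "name": "allow-app-to-db",
        "direction": "INGRESS",
        "allowed": [{"IPProtocol": "tcp", "ports": ["3306"]}],
        "sourceServiceAccounts": ["app-tier@example-project.iam.gserviceaccount.com"],
        "targetServiceAccounts": ["db-tier@example-project.iam.gserviceaccount.com"],
    }
```

Both rules express the same network intent; the difference is purely in how strongly the source and target identities are controlled.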
CHECK666
Highly Voted 4 years ago
A is the answer, use network tags.
upvoted 6 times
...
[Removed]
Most Recent 1 year, 2 months ago
Selected Answer: A
"A" The choice is between A and B. Even though subnet isolation is recommended (which would make B correct), subnet isolation alone without accompanying firewall rules does not ensure security. Only A emphasizes the use of firewall which makes it more correct than B. Reference: https://cloud.google.com/architecture/best-practices-vpc-design#target_filtering
upvoted 3 times
Portugapt
6 months, 2 weeks ago
But here the question goes into the design of the network, not the specific implementation details. For design, B makes more sense.
upvoted 1 times
...
...
AzureDP900
1 year, 11 months ago
A is correct; the rest of the answers don't make any sense
upvoted 1 times
azureaspirant
1 year, 11 months ago
@AzureDP900: Cleared AWS Solution Architect Professional (SAP - CO1) on the last date. followed your answers. Cleared 5 GCP Certificates. Glad that you are here.
upvoted 2 times
...
...
AwesomeGCP
2 years ago
Selected Answer: A
A. Create an ingress firewall rule to allow access only from the application to the database using firewall tags.
upvoted 1 times
...
zqwiklabs
3 years, 6 months ago
A is definitely incorrect
upvoted 4 times
mistryminded
2 years, 10 months ago
This one is confusing but cannot be A because it says 'Firewall tags'. There is no such thing as firewall tags, only 'Network tags'.
upvoted 2 times
...
...
desertlotus1211
3 years, 6 months ago
Answer is D: you'd want the DB in a separate VPC. Allow VPC peering and connect the front end's backend to the DB. Don't get confused by the question saying 'front end'; front end only means public facing...
upvoted 1 times
AzureDP900
1 year, 11 months ago
A is correct
upvoted 1 times
...
Jane111
3 years, 5 months ago
you need to read basic concepts again
upvoted 7 times
...
...
DebasishLowes
3 years, 7 months ago
Ans : A
upvoted 3 times
...
[Removed]
3 years, 11 months ago
Ans - A
upvoted 2 times
...
mlyu
4 years, 1 month ago
Agree with A
upvoted 2 times
...

Question 55

Exam Professional Cloud Security Engineer topic 1 question 55 discussion

An organization receives an increasing number of phishing emails.
Which method should be used to protect employee credentials in this situation?

  • A. Multifactor Authentication
  • B. A strict password policy
  • C. Captcha on login pages
  • D. Encrypted emails
Suggested Answer: A 🗳️

Comments

DebasishLowes
Highly Voted 3 years, 7 months ago
A is the answer.
upvoted 10 times
...
GHOST1985
Highly Voted 2 years ago
Selected Answer: A
https://cloud.google.com/blog/products/g-suite/protecting-you-against-phishing
upvoted 5 times
AzureDP900
1 year, 11 months ago
Agree with A
upvoted 1 times
...
...
nccdebug
Most Recent 7 months, 3 weeks ago
Ans: A. Implementing MFA helps mitigate the risk posed by phishing attacks by adding an additional barrier to unauthorized access to employee credentials.
upvoted 2 times
...
[Removed]
1 year, 2 months ago
Selected Answer: A
"A" Encrypting emails (D) does not prevent or protect against phishing. Phishing leads to attacker getting a user's password. In order to protect against the "impact" of phishing, requiring a second factor would prevent the attacker from logging in using only the password once stolen.
upvoted 3 times
...
Ric350
1 year, 6 months ago
The question is asking how to PROTECT employee credentials, NOT how to best protect against phishing. MFA does that in case a user's credentials are compromised, by requiring a second verification factor. It's another layer in a defense-in-depth approach.
upvoted 3 times
...
Mixxer5
1 year, 10 months ago
Selected Answer: D
MFA itself doesn't really protect users' credentials from being leaked. It makes it harder (or nigh impossible) to log in even if they get leaked, but they may still leak. Encrypting emails would be of more help, although in the case of phishing emails it'd be best to educate users and add some filters that flag external emails as suspicious.
upvoted 1 times
...
AwesomeGCP
2 years ago
Selected Answer: A
A. Multifactor Authentication
upvoted 3 times
...
Deepanshd
2 years ago
Selected Answer: A
Multi-factor authentication will protect employee credentials
upvoted 2 times
...
fanilgor
2 years, 1 month ago
Selected Answer: A
A for sure
upvoted 1 times
...
lxs
2 years, 10 months ago
Selected Answer: D
This question has been taken from the GCP book.
upvoted 4 times
...
mondigo
3 years, 10 months ago
A https://cloud.google.com/blog/products/g-suite/7-ways-admins-can-help-secure-accounts-against-phishing-g-suite
upvoted 3 times
...
ronron89
3 years, 10 months ago
https://www.duocircle.com/content/email-security-services/email-security-in-cryptography#:~:text=Customer%20Login-,Email%20Security%20In%20Cryptography%20Is%20One%20Of%20The%20Most,Measures%20To%20Prevent%20Phishing%20Attempts&text=Cybercriminals%20love%20emails%20the%20most,networks%20all%20over%20the%20world. The answer should be D.
upvoted 2 times
...
shk2011
3 years, 11 months ago
Logically, even without having read about cloud, the answer is A
upvoted 3 times
...
[Removed]
3 years, 11 months ago
Ans - A
upvoted 2 times
...
CHECK666
4 years ago
The answer is A. https://cloud.google.com/blog/products/identity-security/protect-users-in-your-apps-with-multi-factor-authentication
upvoted 3 times
...
Sheeda
4 years, 1 month ago
Should be A
upvoted 3 times
...

Question 56

Exam Professional Cloud Security Engineer topic 1 question 56 discussion

A customer is collaborating with another company to build an application on Compute Engine. The customer is building the application tier in their GCP
Organization, and the other company is building the storage tier in a different GCP Organization. This is a 3-tier web application. Communication between portions of the application must not traverse the public internet by any means.
Which connectivity option should be implemented?

  • A. VPC peering
  • B. Cloud VPN
  • C. Cloud Interconnect
  • D. Shared VPC
Suggested Answer: A 🗳️

Comments

sc_cloud_learn
Highly Voted 3 years, 9 months ago
both are GCP, should be VPC peering- Option A
upvoted 17 times
...
okhascorpio
Most Recent 1 year, 1 month ago
Selected Answer: C
Key information being "Communication between portions of the application must not traverse the public internet by any means" leaves only option "C" as a valid one, as all other options rely on the public internet for data transmission.
upvoted 1 times
Oujay
9 months, 2 weeks ago
Connects your on-premises network to GCP, not relevant for connecting two GCP organizations
upvoted 2 times
...
...
[Removed]
1 year, 3 months ago
Selected Answer: A
Vpc peering definitely
upvoted 2 times
...
[Removed]
1 year, 8 months ago
Selected Answer: A
"A" Since both are in GCP then VPC Peering makes most sense. References: https://cloud.google.com/vpc/docs/vpc-peering
upvoted 3 times
...
shayke
2 years, 6 months ago
Selected Answer: A
only a
upvoted 2 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: A
A – Peering two VPCs does permit traffic to flow between the two shared networks, but it’s only bi-directional. Peered VPC networks remain administratively separate. Dedicated Interconnect connections enable you to connect your on-premises network … in another project, as long as they are both in the same organization. hence A
upvoted 1 times
AzureDP900
2 years, 5 months ago
Agreed, A is correct.
upvoted 1 times
...
...
DP_GCP
3 years, 11 months ago
B is not correct because if Cloud VPN is used, data travels over the internet, and the question says it doesn't want the data to travel over the internet. https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview: "Cloud VPN securely connects your peer network to your Virtual Private Cloud (VPC) network through an IPsec VPN connection. Traffic traveling between the two networks is encrypted by one VPN gateway and then decrypted by the other VPN gateway. This action protects your data as it travels over the internet."
upvoted 1 times
PATILDXB
2 years, 3 months ago
Cloud VPN is a private connection, and different from normal IP VPN or IPSecVPN. Cloud VPN does not ride on internet. B is correct and appropriate, as it is cheaper than VPC peering, because VPC peering incurs charges
upvoted 1 times
mikez2023
2 years, 1 month ago
Cloud VPN securely connects your peer network to your Virtual Private Cloud (VPC) network through an IPsec VPN connection. Traffic traveling between the two networks is encrypted by one VPN gateway and then decrypted by the other VPN gateway. This action protects your data as it travels over the internet. You can also connect two instances of Cloud VPN to each other.
upvoted 1 times
nccdebug
1 year, 1 month ago
Communication between portions of the application must not traverse the public internet by any means, so A is the answer
upvoted 1 times
...
...
...
...
dtmtor
4 years ago
A, different orgs
upvoted 4 times
...
DebasishLowes
4 years, 1 month ago
A is the answer.
upvoted 2 times
...
[Removed]
4 years, 5 months ago
Ans - A
upvoted 3 times
...
CHECK666
4 years, 6 months ago
A is the answer. Use VPC peering.
upvoted 3 times
...
Akku1614
4 years, 7 months ago
Yes it Should be VPC Peering. https://cloud.google.com/vpc/docs/vpc-peering
upvoted 3 times
...
Sheeda
4 years, 7 months ago
Should be A
upvoted 4 times
...

Question 57

Exam Professional Cloud Security Engineer topic 1 question 57 discussion

Your team wants to make sure Compute Engine instances running in your production project do not have public IP addresses. The frontend application Compute
Engine instances will require public IPs. The product engineers have the Editor role to modify resources. Your team wants to enforce this requirement.
How should your team meet these requirements?

  • A. Enable Private Access on the VPC network in the production project.
  • B. Remove the Editor role and grant the Compute Admin IAM role to the engineers.
  • C. Set up an organization policy to only permit public IPs for the front-end Compute Engine instances.
  • D. Set up a VPC network with two subnets: one with public IPs and one without public IPs.
Suggested Answer: C 🗳️

Comments

saurabh1805
Highly Voted 3 years, 5 months ago
C is correct option here, Refer below link for more details. https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints#constraints-for-specific-services
upvoted 12 times
AzureDP900
1 year, 5 months ago
Yes, C is right
upvoted 2 times
...
FatCharlie
3 years, 4 months ago
More specifically, it's the "Restrict VM IP Forwarding" constraint under Compute Engine
upvoted 3 times
FatCharlie
3 years, 4 months ago
Sorry, no. It's the one under that :) "Define allowed external IPs for VM instances"
upvoted 2 times
...
...
...
[Removed]
Most Recent 8 months, 3 weeks ago
Selected Answer: C
"C" Only C addresses both concerns regarding public IP and the Editor role privileges. Applying constraints at the org level mitigates the editor privileges and provides the access restrictions desired. References: https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints#constraints-for-specific-services
upvoted 2 times
...
passex
1 year, 4 months ago
And how would you separate the front-end VMs from the others using org policy constraints? IMO option D makes more sense
upvoted 4 times
fad3r
1 year ago
Initially I agreed with you, but after looking at the link above it does say this: "This list constraint defines the set of Compute Engine VM instances that are allowed to use external IP addresses. By default, all VM instances are allowed to use external IP addresses. The allowed/denied list of VM instances must be identified by the VM instance name, in the form: projects/PROJECT_ID/zones/ZONE/instances/INSTANCE" (constraints/compute.vmExternalIpAccess). So you can indeed choose which instances have public IPs. See "Define allowed external IPs for VM instances" at https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints#constraints-for-specific-services
upvoted 3 times
...
...
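The constraint quoted in the comments above can be sketched as a policy payload. A minimal illustration, with a made-up project, zone, and instance names, of the `constraints/compute.vmExternalIpAccess` list constraint that would allow external IPs only on the front-end instances:

```python
# Sketch of an org-policy list constraint that allowlists external IPs for
# specific VM instances only. Project, zone, and instance names are
# hypothetical; the dict mirrors the list-policy shape described in the
# org-policy constraints documentation.

def external_ip_policy(allowed_instances):
    # constraints/compute.vmExternalIpAccess identifies VM instances in the
    # form projects/PROJECT_ID/zones/ZONE/instances/INSTANCE. Instances not
    # on the allow list cannot be assigned an external IP, regardless of the
    # Editor role held by project engineers.
    return {
        "constraint": "constraints/compute.vmExternalIpAccess",
        "listPolicy": {"allowedValues": list(allowed_instances)},
    }

policy = external_ip_policy([
    "projects/prod-project/zones/us-central1-a/instances/frontend-1",
    "projects/prod-project/zones/us-central1-a/instances/frontend-2",
])
```

Because the policy is enforced at the organization level, it constrains even users with the Editor role inside the project, which is why option C addresses both requirements in the question.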
AwesomeGCP
1 year, 6 months ago
Selected Answer: C
C. Set up an organization policy to only permit public IPs for the front-end Compute Engine instances.
upvoted 4 times
fad3r
1 year ago
Sorry meant to comment this on the above post
upvoted 1 times
...
...
...
bartlomiejwaw
1 year, 11 months ago
Not C - Editor role is not enough for setting up org policies
upvoted 2 times
...
DebasishLowes
3 years, 1 month ago
Ans : C
upvoted 3 times
...
[Removed]
3 years, 5 months ago
Ans - C
upvoted 4 times
...
HectorLeon2099
3 years, 6 months ago
I'll go with A
upvoted 2 times
...

Question 58

Exam Professional Cloud Security Engineer topic 1 question 58 discussion

Which two security characteristics are related to the use of VPC peering to connect two VPC networks? (Choose two.)

  • A. Central management of routes, firewalls, and VPNs for peered networks
  • B. Non-transitive peered networks; where only directly peered networks can communicate
  • C. Ability to peer networks that belong to different Google Cloud organizations
  • D. Firewall rules that can be created with a tag from one peered network to another peered network
  • E. Ability to share specific subnets across peered networks
Suggested Answer: BC 🗳️

Comments

DebasishLowes
Highly Voted 3 years, 6 months ago
Ans : BC
upvoted 17 times
...
mlyu
Highly Voted 4 years, 1 month ago
Ans should be BC https://cloud.google.com/vpc/docs/vpc-peering#key_properties
upvoted 5 times
ownez
4 years, 1 month ago
Correct. B: "Only directly peered networks can communicate. Transitive peering is not supported." C: " You can make services available privately across different VPC networks within and across organizations."
upvoted 3 times
Mihai89
3 years, 11 months ago
Agree with BC
upvoted 1 times
...
...
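The non-transitivity in option B can be sketched as a one-hop reachability check: with vpc-a peered to vpc-b and vpc-b peered to vpc-c, vpc-a still cannot reach vpc-c. The network names below are hypothetical.

```python
# VPC peering is non-transitive: traffic flows only between directly peered
# networks, never through an intermediate peer. Model peerings as undirected
# pairs and allow exactly one hop.

def can_communicate(peerings, net_a, net_b):
    # Only a direct peering permits communication; there is no multi-hop
    # forwarding across a chain of peered networks.
    return frozenset((net_a, net_b)) in {frozenset(p) for p in peerings}

# vpc-a <-> vpc-b and vpc-b <-> vpc-c are peered; vpc-a <-> vpc-c is not.
peerings = [("vpc-a", "vpc-b"), ("vpc-b", "vpc-c")]
```

This is the behavior the specification calls out ("Only directly peered networks can communicate. Transitive peering is not supported."), and it is a security property: a peer of your peer gets no implicit access to your network.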
MohitA
4 years, 1 month ago
agree BC
upvoted 1 times
...
...
YourFriendlyNeighborhoodSpider
Most Recent 4 weeks ago
Selected Answer: BD
C. Ability to peer networks that belong to different Google Cloud organizations This statement is not correct. VPC peering can only be established between VPCs that belong to the same Google Cloud organization, or within separate projects of the same organization, but not across different organizations without specific configurations.
upvoted 1 times
...
okhascorpio
7 months, 3 weeks ago
Selected Answer: BD
https://cloud.google.com/firewall/docs/tags-firewalls-overview
upvoted 1 times
...
okhascorpio
7 months, 3 weeks ago
Selected Answer: BD
B and D as the question specifically ask for security capabilities. C is not a security capability while D is.
upvoted 3 times
JohnDohertyDoe
3 months, 3 weeks ago
Tags do not work across peered networks. https://cloud.google.com/vpc/docs/vpc-peering#tags-service-accounts
upvoted 1 times
...
...
mackarel22
1 year, 7 months ago
Selected Answer: BC
https://cloud.google.com/vpc/docs/vpc-peering#specifications Transitive peering is not supported. So BC
upvoted 2 times
...
Meyucho
1 year, 9 months ago
Selected Answer: CE
Although B is correct, going into detail I think that non-transitivity is only true for networks joined by peering; if there is a third network connected by VPN or Interconnect, there is transitivity. So I discard B and stay with C and E
upvoted 1 times
...
AzureDP900
1 year, 11 months ago
BC is right
upvoted 2 times
...
AwesomeGCP
2 years ago
Selected Answer: BC
B. Non-transitive peered networks; where only directly peered networks can communicate C. Ability to peer networks that belong to different Google Cloud Platform organizations
upvoted 3 times
...
zellck
2 years ago
Selected Answer: BC
BC is the answer.
upvoted 2 times
...
Medofree
2 years, 6 months ago
D is false because : "You cannot use a tag or service account from one peered network in the other peered network."
upvoted 1 times
...
dtmtor
3 years, 6 months ago
Answer is BC
upvoted 3 times
...
Aniyadu
3 years, 9 months ago
B&C is the right answer
upvoted 2 times
...
FatCharlie
3 years, 10 months ago
The answers marked in the question seem to be referring to _shared_ VPC capabilities.
upvoted 1 times
...
[Removed]
3 years, 11 months ago
Ans - BC
upvoted 2 times
...
CHECK666
4 years ago
BC is the answer.
upvoted 2 times
...
cipher90
4 years, 1 month ago
AD is correct "Security Characteristics"
upvoted 1 times
mte_tech34
4 years ago
No it's not. "You cannot use a tag or service account from one peered network in the other peered network." -> https://cloud.google.com/vpc/docs/vpc-peering
upvoted 2 times
...
...

Question 59

Exam Professional Cloud Security Engineer topic 1 question 59 discussion

A patch for a vulnerability has been released, and a DevOps team needs to update their running containers in Google Kubernetes Engine (GKE).
How should the DevOps team accomplish this?

  • A. Use Puppet or Chef to push out the patch to the running container.
  • B. Verify that auto upgrade is enabled; if so, Google will upgrade the nodes in a GKE cluster.
  • C. Update the application code or apply a patch, build a new image, and redeploy it.
  • D. Configure containers to automatically upgrade when the base image is available in Container Registry.
Suggested Answer: C 🗳️

Comments

TNT87
Highly Voted 4 years, 2 months ago
https://cloud.google.com/containers/security Containers are meant to be immutable, so you deploy a new image in order to make changes. You can simplify patch management by rebuilding your images regularly, so the patch is picked up the next time a container is deployed. Get the full picture of your environment with regular image security reviews. C is better
upvoted 15 times
AzureDP900
2 years, 5 months ago
Yes, C is correct
upvoted 1 times
...
...
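The immutability argument in the top comment maps to a simple workflow: never mutate a running container; rebuild the image with the patch and roll the new image out. A schematic sketch of answer C, with hypothetical image and deployment names:

```python
# Schematic of answer C: apply the patch, build a new image, redeploy.
# Containers are treated as immutable, so a "patch" produces a new image tag
# rather than modifying running containers in place (which options A and D
# imply). Names and tags are hypothetical.

def patch_and_redeploy(deployment, new_tag):
    # Return a new deployment spec pointing at the rebuilt image; the running
    # containers are replaced during the rollout, not edited in place.
    image = deployment["image"].rsplit(":", 1)[0]
    return {**deployment, "image": f"{image}:{new_tag}"}

old = {"name": "webshop", "image": "gcr.io/example/app:v1", "replicas": 3}
new = patch_and_redeploy(old, "v2-cve-fix")
```

Note the distinction raised in the thread: node auto-upgrade (option B) patches the GKE nodes and control plane, but a vulnerability in the application's own container image still requires rebuilding and redeploying the image as sketched here.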
DebasishLowes
Highly Voted 4 years ago
Ans : C
upvoted 7 times
...
nah99
Most Recent 4 months, 3 weeks ago
Selected Answer: B
https://cloud.google.com/kubernetes-engine/docs/resources/security-patching#how_vulnerabilities_are_patched
upvoted 1 times
...
GCBC
1 year, 7 months ago
C is the answer - auto upgrade alone will not patch the running containers
upvoted 2 times
...
[Removed]
1 year, 8 months ago
Selected Answer: C
"C" Containers are immutable and cannot be updated in place. Base image/container must be patched and then gradually introduced to live container pool. References: https://cloud.google.com/architecture/best-practices-for-operating-containers#immutability
upvoted 2 times
...
Ishu_awsguy
1 year, 10 months ago
My vote is for B. This is a big value add of GKE - in-place upgrades.
upvoted 1 times
...
Ric350
2 years ago
B is 100% the answer. Fixing some vulnerabilities requires only a control plane upgrade, performed automatically by Google on GKE, while others require both control plane and node upgrades. To keep clusters patched and hardened against vulnerabilities of all severities, we recommend using node auto-upgrade on GKE (on by default). https://cloud.google.com/kubernetes-engine/docs/resources/security-patching#how_vulnerabilities_are_patched
upvoted 2 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: C
C. Update the application code or apply a patch, build a new image, and redeploy it.
upvoted 1 times
...
Medofree
2 years, 12 months ago
Selected Answer: C
Correct ans is C, because "DevOps team needs to update their running containers".
upvoted 2 times
...
Rhehehe
3 years, 3 months ago
Its actually B. Patching a vulnerability involves upgrading to a new GKE or Anthos version number. GKE and Anthos versions include versioned components for the operating system, Kubernetes components, and other containers that make up the Anthos platform. Fixing some vulnerabilities requires only a control plane upgrade, performed automatically by Google on GKE, while others require both control plane and node upgrades. To keep clusters patched and hardened against vulnerabilities of all severities, we recommend using node auto-upgrade on GKE (on by default). On other Anthos platforms, Google recommends upgrading your Anthos components at least monthly. Ref: https://cloud.google.com/kubernetes-engine/docs/resources/security-patching
upvoted 5 times
StanPeng
3 years, 1 month ago
The question is asking about upgrading application code rather than GKE
upvoted 1 times
Ric350
2 years ago
No, the question is asking how vulnerabilities are patched! To keep clusters patched and hardened against vulnerabilities of all severities, we recommend using node auto-upgrade on GKE (on by default). https://cloud.google.com/kubernetes-engine/docs/resources/security-patching#how_vulnerabilities_are_patched
upvoted 2 times
...
...
alexm112
3 years, 2 months ago
Agreed - I think this wasn't available at the time people responded. B is correct https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades
upvoted 2 times
...
...
SuperDevops
3 years, 5 months ago
I took the test yesterday and didn't pass; nothing was from here. The questions are totally new. Whizlabs is OK
upvoted 1 times
sriz
3 years, 4 months ago
u got questions from Whizlabs?
upvoted 2 times
...
...
Aniyadu
4 years, 3 months ago
The question asks how the "team needs to update their running containers". If auto upgrade were enabled, there would be no need to update manually, so my answer is C.
upvoted 2 times
...
Kevinsayn
4 years, 4 months ago
I'm definitely going with C, since updating the nodes with auto-upgrade has nothing to do with the containers. The vulnerability in this case must be addressed at the container level, i.e. the application, so answer C is the correct one.
upvoted 3 times
...
jonclem
4 years, 5 months ago
Answer B is correct as per the Video Google Kubernetes Engine (GKE) Security on Linuxacademy.
upvoted 2 times
...
[Removed]
4 years, 5 months ago
Ans - C
upvoted 3 times
...
Rantu
4 years, 6 months ago
C is the correct answer as this is the way to patch, build, re-deploy
upvoted 3 times
...
Namaste
4 years, 6 months ago
Answer is C.
upvoted 3 times
...

Question 60

Exam Professional Cloud Security Engineer topic 1 question 60 discussion

A company is running their webshop on Google Kubernetes Engine and wants to analyze customer transactions in BigQuery. You need to ensure that no credit card numbers are stored in BigQuery
What should you do?

  • A. Create a BigQuery view with regular expressions matching credit card numbers to query and delete affected rows.
  • B. Use the Cloud Data Loss Prevention API to redact related infoTypes before data is ingested into BigQuery.
  • C. Leverage Security Command Center to scan for the assets of type Credit Card Number in BigQuery.
  • D. Enable Cloud Identity-Aware Proxy to filter out credit card numbers before storing the logs in BigQuery.
Suggested Answer: B 🗳️

Comments

saurabh1805
Highly Voted 4 years, 5 months ago
B is correct answer here.
upvoted 12 times
saurabh1805
4 years, 5 months ago
https://cloud.google.com/bigquery/docs/scan-with-dlp
upvoted 4 times
...
...
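What the Cloud DLP API does for the CREDIT_CARD_NUMBER infoType can be approximated locally to illustrate the redact-before-ingest idea in option B. This is only a toy sketch (a regex candidate scan plus a Luhn checksum); in practice the Cloud DLP service should do the detection and de-identification before rows reach BigQuery.

```python
import re

# Toy stand-in for DLP-style redaction of credit card numbers: find 13-19
# digit candidates (allowing space/dash separators), verify each with the
# Luhn checksum, and replace matches with a placeholder BEFORE ingestion.
# The real Cloud DLP API does this with built-in infoType detectors.

def luhn_ok(digits):
    # Standard Luhn checksum: double every second digit from the right,
    # subtract 9 from results above 9, and require the sum to be mod-10 zero.
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d)
        if i % 2 == 1:
            n *= 2
            if n > 9:
                n -= 9
        total += n
    return total % 10 == 0

def redact_cards(text):
    def repl(match):
        digits = re.sub(r"[ -]", "", match.group(0))
        return "[CREDIT_CARD_NUMBER]" if luhn_ok(digits) else match.group(0)
    # 13-19 digits, optionally separated by single spaces or dashes.
    return re.sub(r"\b\d(?:[ -]?\d){12,18}\b", repl, text)
```

The point of option B is the placement of this step: redaction happens in the ingestion path, so the card numbers never land in BigQuery at all, unlike the after-the-fact scanning in options A and C.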
pixfw1
9 months, 4 weeks ago
DLP for sure.
upvoted 1 times
...
madcloud32
12 months ago
Selected Answer: B
B is correct. Got this in the exam; the dump is valid. A few new questions came up, but easy ones.
upvoted 1 times
...
cloud_monk
1 year ago
Selected Answer: B
DLP is the service specifically for this task.
upvoted 1 times
...
madcloud32
1 year, 1 month ago
Selected Answer: B
B is correct. DLP
upvoted 1 times
...
[Removed]
1 year, 3 months ago
Selected Answer: B
B - you want to use DLP for that
upvoted 2 times
...
jsiror
1 year, 7 months ago
Selected Answer: B
B is the correct answer
upvoted 2 times
...
[Removed]
1 year, 8 months ago
Selected Answer: B
"B" A and C are reactive measures. D is not related to hiding sensitive information. B is the only pro-active/preventative measure specific to hiding sensitive information. https://cloud.google.com/bigquery/docs/scan-with-dlp
upvoted 2 times
...
pedrojorge
2 years, 2 months ago
Selected Answer: B
B. https://cloud.google.com/bigquery/docs/scan-with-dlp
upvoted 2 times
...
jaykumarjkd99
2 years, 3 months ago
Selected Answer: B
B is correct answer here. .
upvoted 2 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: B
B. Use the Cloud Data Loss Prevention API to redact related infoTypes before data is ingested into BigQuery.
upvoted 3 times
...
giovy_82
2 years, 7 months ago
Selected Answer: B
How can it be D? I'll go for B; DLP is the tool to scan for and find sensitive data
upvoted 1 times
...
sudarchary
3 years, 2 months ago
https://cloud.google.com/bigquery/docs/scan-with-dlp
upvoted 1 times
...
sudarchary
3 years, 2 months ago
Selected Answer: B
Cloud Data Loss Prevention API allows to detect and redact or remove sensitive data before the comments or reviews are published. Cloud DLP will read information from BigQuery, Cloud Storage or Datastore and scan it for sensitive data.
upvoted 1 times
AzureDP900
2 years, 5 months ago
B is correct
upvoted 1 times
...
...
rr4444
3 years, 3 months ago
Selected Answer: B
D is silly
upvoted 1 times
...
[Removed]
3 years, 12 months ago
D is impossible. I support B
upvoted 2 times
...

Question 61

Exam Professional Cloud Security Engineer topic 1 question 61 discussion

A customer wants to deploy a large number of 3-tier web applications on Compute Engine.
How should the customer ensure authenticated network separation between the different tiers of the application?

  • A. Run each tier in its own Project, and segregate using Project labels.
  • B. Run each tier with a different Service Account (SA), and use SA-based firewall rules.
  • C. Run each tier in its own subnet, and use subnet-based firewall rules.
  • D. Run each tier with its own VM tags, and use tag-based firewall rules.
Show Suggested Answer Hide Answer
Suggested Answer: B 🗳️

Comments

genesis3k
Highly Voted 4 years, 5 months ago
Answer is B. Keyword is 'authenticated'. Reference below, under "Isolate VMs using service accounts when possible": "even though it is possible to use tags for target filtering in this manner, we recommend that you use service accounts where possible. Target tags are not access-controlled and can be changed by someone with the instanceAdmin role while VMs are in service. Service accounts are access-controlled, meaning that a specific user must be explicitly authorized to use a service account. There can only be one service account per instance, whereas there can be multiple tags. Also, service accounts assigned to a VM can only be changed when the VM is stopped." https://cloud.google.com/solutions/best-practices-vpc-design#isolate-vms-service-accounts
upvoted 32 times
Ric350
2 years ago
Thank you for this great explanation with link to documentation.
upvoted 1 times
...
gu9singg
4 years ago
the document also talks about subnet isolation
upvoted 2 times
...
AzureDP900
2 years, 5 months ago
Agreed with you and B is right
upvoted 1 times
...
...
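The "authenticated" separation highlighted in the top comment can be sketched as rule evaluation keyed on service accounts rather than mutable tags: each tier's identity is its service account, and only explicitly allowed identity pairs may communicate. The service-account names below are hypothetical.

```python
# Sketch of SA-based firewall rules for a 3-tier web application: the VM's
# IAM-controlled service account (not an editable tag or an IP range) decides
# which tier may talk to which. Service-account names are hypothetical.

ALLOWED_FLOWS = {
    # web tier -> app tier, app tier -> db tier; nothing else.
    ("web-sa@example.iam.gserviceaccount.com", "app-sa@example.iam.gserviceaccount.com"),
    ("app-sa@example.iam.gserviceaccount.com", "db-sa@example.iam.gserviceaccount.com"),
}

def is_allowed(source_sa, target_sa):
    # Default-deny: only explicitly allowed SA pairs may communicate, and
    # changing a VM's SA requires IAM authorization plus a stopped VM.
    return (source_sa, target_sa) in ALLOWED_FLOWS
```

Because assigning a service account to a VM is access-controlled, this separation is authenticated in a way that tag-based (option D) or subnet-based (option C) rules are not.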
BPzen
Most Recent 4 months, 1 week ago
Selected Answer: B
Why B is Correct: Authenticated Separation: Service accounts are tied to IAM policies and can be used to authenticate requests between tiers. They are access-controlled and cannot be modified dynamically while a VM is running, providing stronger guarantees for isolation. Firewall Rules with Service Accounts: Google Cloud supports using service accounts as targets for firewall rules. This ensures that traffic can only flow to VMs with specific service accounts, effectively creating authenticated boundaries between tiers.
upvoted 1 times
...
BPzen
4 months, 3 weeks ago
Selected Answer: D
VM tags in Google Cloud are a flexible way to categorize and identify virtual machines (VMs) by their function or purpose, such as "frontend," "backend," or "database" for a 3-tier application. By assigning each tier its own tag and applying tag-based firewall rules, the customer can enforce network separation and restrict communication between tiers based on tags. This approach provides authenticated network segmentation by allowing or denying traffic between specific tags, ensuring that only intended communications occur between application tiers.
upvoted 1 times
...
nairj
6 months, 3 weeks ago
Ans: C. The question asks for network separation. In case of B, all the tiers are still in the same subnet but are isolated using SAs or tags; with C, you clearly separate the network. Hence my answer is C.
upvoted 1 times
...
pico
11 months ago
Selected Answer: C
Why the other options are less ideal:
A. Project labels: Project labels are primarily for organizational purposes and don't provide strong network isolation.
B. Service Accounts: While service accounts can be used for authentication, using them alone for network separation can be complex and less effective than subnet-based rules.
D. VM tags: VM tags can be used for filtering in firewall rules, but they don't inherently create network separation.
upvoted 1 times
...
ArizonaClassics
1 year, 6 months ago
B. Run each tier with a different Service Account (SA), and use SA-based firewall rules: Service accounts are primarily designed for authentication and authorization of service-to-service interactions. Using them for network separation is possible but is not their primary use case.
D. Run each tier with its own VM tags, and use tag-based firewall rules: This is the most recommended method for multi-tier applications. VM tags are a straightforward way to identify the role or purpose of a VM (like 'web', 'app', 'database'). When VMs are tagged appropriately, tag-based firewall rules can easily control which tiers can communicate with each other. For example, firewall rules can be set so that only VMs with the 'web' tag can communicate with VMs with the 'app' tag, and so on.
upvoted 2 times
...
GCBC
1 year, 7 months ago
B - https://cloud.google.com/solutions/best-practices-vpc-design#isolate-vms-service-accounts
upvoted 2 times
...
[Removed]
1 year, 8 months ago
Selected Answer: B
"B" Keyword here is "authenticated". Service account related answer is the only option that addresses authentication. The rest are network security related. References: https://cloud.google.com/compute/docs/access/service-accounts#use-sas https://cloud.google.com/solutions/best-practices-vpc-design#isolate-vms-service-accounts
upvoted 4 times
...
riteshahir5815
2 years ago
Selected Answer: C
c is correct answer.
upvoted 2 times
...
mahi9
2 years, 1 month ago
Selected Answer: B
SA accounts
upvoted 1 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: B
B. Run each tier with a different Service Account (SA), and use SA-based firewall rules.
upvoted 1 times
...
mynk29
3 years, 1 month ago
"As previously mentioned, you can identify the VMs on a specific subnet by applying a unique network tag or service account to those instances. This allows you to create firewall rules that only apply to the VMs in a subnet—those with the associated network tag or service account. For example, to create a firewall rule that permits all communication between VMs in the same subnet, you can use the following rule configuration on the Firewall rules page:" B is the right answer
upvoted 2 times
...
mistryminded
3 years, 4 months ago
Selected Answer: B
Answer is B - https://cloud.google.com/vpc/docs/firewalls#service-accounts-vs-tags
upvoted 2 times
...
gu9singg
4 years ago
C is incorrect: we need to authenticate; network rules do not apply, and it is not a recommended best practice from Google.
upvoted 2 times
gu9singg
4 years ago
C is incorrect because we would need to spend a lot of time designing the network topology. Google's recommended practice is a simple network design with automation in mind, which service accounts provide, so the final decision goes to B.
upvoted 2 times
...
gu9singg
4 years ago
Correct answer is B
upvoted 2 times
...
...
DebasishLowes
4 years ago
Ans : C
upvoted 2 times
...
singhjoga
4 years, 3 months ago
B as per best practices https://cloud.google.com/solutions/best-practices-vpc-design
upvoted 3 times
...
Fellipo
4 years, 5 months ago
B exists?
upvoted 1 times
...

Question 62

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 62 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 62
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A manager wants to start retaining security event logs for 2 years while minimizing costs. You write a filter to select the appropriate log entries.
Where should you export the logs?

  • A. BigQuery datasets
  • B. Cloud Storage buckets
  • C. StackDriver logging
  • D. Cloud Pub/Sub topics
Show Suggested Answer Hide Answer
Suggested Answer: B 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
madcloud32
7 months ago
Selected Answer: B
B : GCS without any doubts.
upvoted 2 times
...
[Removed]
10 months ago
Selected Answer: B
B - minimizing cost
upvoted 3 times
...
[Removed]
1 year, 2 months ago
Selected Answer: B
"B" Keyword here is minimizing cost. Cloud storage is typically the most cost effective option. References: https://cloud.google.com/blog/products/storage-data-transfer/how-to-save-on-google-cloud-storage-costs
upvoted 3 times
...
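The Cloud Storage export strategy these comments settle on can be sketched as follows; the sink name, bucket, log filter, and lifecycle thresholds are illustrative assumptions:

```shell
# Sketch: route the filtered security log entries to a Cloud Storage
# bucket, then let a lifecycle rule age objects into a colder (cheaper)
# storage class for the 2-year retention window.
gcloud logging sinks create security-log-sink \
  storage.googleapis.com/example-security-logs-archive \
  --log-filter='logName:"cloudaudit.googleapis.com"'

# Optional cost optimization: move objects to Coldline after 30 days.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 30}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://example-security-logs-archive
```

Remember to grant the sink's writer identity (printed by the create command) object-creation access on the bucket.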
shayke
1 year, 9 months ago
Selected Answer: B
B is the cheapest option.
upvoted 2 times
...
AzureDP900
1 year, 11 months ago
B is best for cost optimization perspective
upvoted 2 times
...
shayke
1 year, 11 months ago
Selected Answer: B
GCS would be the cheapest option
upvoted 2 times
...
AwesomeGCP
2 years ago
Selected Answer: B
B. Cloud Storage buckets
upvoted 1 times
...
Deepanshd
2 years ago
Selected Answer: B
Cloud Storage is always considered when minimizing cost
upvoted 1 times
...
Bill1000
2 years ago
B is correct
upvoted 2 times
...
mbiy
2 years, 7 months ago
Ans C is correct, you can define a custom log bucket and mention the retention policy for any number of years (range - 1 day to 3650 days). Underlying these custom define log bucket is also created within Cloud Storage. As per the question you can retain log for 2 years in Stackdriver Logging which is aka Cloud Logging, and then later archive to cold line storage if there is a requirement.
upvoted 1 times
VJ_0909
2 years, 7 months ago
Default retention for logging is 30 days because it is expensive to hold the logs there for longer duration. Bucket is always the cheapest option.
upvoted 1 times
...
...
jayk22
2 years, 11 months ago
Ans B. Validated.
upvoted 4 times
...
DebasishLowes
3 years, 7 months ago
Ans: B
upvoted 4 times
...
[Removed]
3 years, 11 months ago
Ans - B
upvoted 1 times
...
Raushanr
4 years ago
Ans is B
upvoted 1 times
...
mlyu
4 years, 1 month ago
Ans B. Cloud Storage is always considered when minimizing cost.
upvoted 2 times
MohitA
4 years, 1 month ago
Agree B
upvoted 1 times
...
...

Question 63

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 63 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 63
Topic #: 1
[All Professional Cloud Security Engineer Questions]

For compliance reasons, an organization needs to ensure that in-scope PCI Kubernetes Pods reside on `in-scope` Nodes only. These Nodes can only contain the
`in-scope` Pods.
How should the organization achieve this objective?

  • A. Add a nodeSelector field to the pod configuration to only use the Nodes labeled inscope: true.
  • B. Create a node pool with the label inscope: true and a Pod Security Policy that only allows the Pods to run on Nodes with that label.
  • C. Place a taint on the Nodes with the label inscope: true and effect NoSchedule and a toleration to match in the Pod configuration.
  • D. Run all in-scope Pods in the namespace "in-scope-pci".
Show Suggested Answer Hide Answer
Suggested Answer: C 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
Tabayashi
Highly Voted 2 years, 5 months ago
[A] Correct answer. This is a typical use case for node selector. https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
[B] The Pod Security Policy is designed to block the creation of misconfigured pods on certain clusters. This does not meet the requirements.
[C] Taint will no longer place pods without the "inscope" label on that node, but it does not guarantee that pods with the "inscope" label will be placed on that node.
[D] Placing the "in scope" node in the namespace "in-scope-pci" may meet the requirement, but [A] takes precedence.
upvoted 11 times
MariaGabiGabriela
2 years, 4 months ago
I think [A] does not stop other pods from being run in the PCI node, which is a requirement as the question states... I would go with [C]
upvoted 8 times
...
AzureDP900
1 year, 11 months ago
A is correct.
upvoted 1 times
gcpengineer
1 year, 4 months ago
C is correct
upvoted 3 times
...
...
...
gcpengineer
Highly Voted 1 year, 4 months ago
Selected Answer: C
C is the ans as per chatgpt
upvoted 6 times
...
Rakesh21
Most Recent 2 months, 1 week ago
Selected Answer: C
Taints and Tolerations are used in Kubernetes to control which Pods can be scheduled on which Nodes. By applying a taint to Nodes labeled as inscope: true with the effect NoSchedule, you ensure that only Pods that can tolerate this taint can be scheduled on these Nodes. Then, by configuring the in-scope Pods with a matching toleration, you guarantee that only these Pods will land on the Nodes marked as in-scope. This method ensures both that only in-scope Pods run on these Nodes and that these Nodes are used exclusively for in-scope Pods, meeting the compliance requirement.
upvoted 1 times
...
JohnDohertyDoe
3 months, 3 weeks ago
Selected Answer: C
Using a node selector does not prevent other pods from being scheduled in the pci-scope nodes. However a taint and toleration would ensure that only the pods with the toleration can be scheduled in the pci-scope nodes.
upvoted 1 times
...
pico
4 months, 3 weeks ago
Selected Answer: C
Why the other options are less suitable:
A. nodeSelector: While nodeSelector can help target pods to specific nodes, it doesn't prevent other pods from being scheduled on those nodes if they fit the node's resources.
B. Node pool and Pod Security Policy: Pod Security Policies are deprecated in newer Kubernetes versions, and node pools alone won't guarantee the required isolation.
D. Namespace: Namespaces provide logical separation but don't inherently enforce node-level restrictions.
upvoted 1 times
...
rsamant
10 months, 2 weeks ago
A https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
upvoted 1 times
...
ArizonaClassics
1 year ago
C. Place a taint on the Nodes with the label inscope: true and effect NoSchedule and a toleration to match in the Pod configuration: This is the best solution. Taints and tolerations work together to ensure that Pods are not scheduled onto inappropriate nodes. By placing a taint on the Nodes, you are essentially marking them so that they repel all Pods that don't have a matching toleration. With this method, only Pods with the correct toleration can be scheduled on in-scope Nodes, ensuring compliance.
upvoted 2 times
...
Meyucho
1 year, 9 months ago
Selected Answer: C
A nodeSelector configuration is from a pod-template perspective. This question asks to RESERVE some nodes for specific pods, which is the main use case for a TAINT. This is a conceptual question, and the answer is C.
upvoted 4 times
...
AwesomeGCP
2 years ago
Selected Answer: A
A. Add a nodeSelector field to the pod configuration to only use the Nodes labeled inscope: true.
upvoted 3 times
...
GHOST1985
2 years ago
Selected Answer: A
nodeSelector is the simplest recommended form of node selection constraint. You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have. Kubernetes only schedules the Pod onto nodes that have each of the labels you specify. => https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector Tolerations are applied to pods. Tolerations allow the scheduler to schedule pods with matching taints. Tolerations allow scheduling but don't guarantee scheduling: the scheduler also evaluates other parameters as part of its function. => https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
upvoted 3 times
...
fanilgor
2 years, 1 month ago
Selected Answer: C
Basic K8s principles of scheduling workloads. Taints and tolerations make perfect sense for this use case. Therefore C.
upvoted 2 times
...
Jeanphi72
2 years, 1 month ago
Selected Answer: A
https://redhat-scholars.github.io/kubernetes-tutorial/kubernetes-tutorial/taints-affinity.html A Taint is applied to a Kubernetes Node that signals the scheduler to avoid or not schedule certain Pods. A Toleration is applied to a Pod definition and provides an exception to the taint. https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a **hard requirement**). Taints are the opposite -- they allow a node to repel a set of pods.
upvoted 3 times
...
hybridpro
2 years, 3 months ago
Answer should be C. "These Nodes can only contain the 'in-scope' Pods." - this can only be achieved by taints and tolerations.
upvoted 1 times
...

Question 64

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 64 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 64
Topic #: 1
[All Professional Cloud Security Engineer Questions]

In an effort for your company messaging app to comply with FIPS 140-2, a decision was made to use GCP compute and network services. The messaging app architecture includes a Managed Instance Group (MIG) that controls a cluster of Compute Engine instances. The instances use Local SSDs for data caching and
UDP for instance-to-instance communications. The app development team is willing to make any changes necessary to comply with the standard.
Which options should you recommend to meet the requirements?

  • A. Encrypt all cache storage and VM-to-VM communication using the BoringCrypto module.
  • B. Set Disk Encryption on the Instance Template used by the MIG to customer-managed key and use BoringSSL for all data transit between instances.
  • C. Change the app instance-to-instance communications from UDP to TCP and enable BoringSSL on clients' TLS connections.
  • D. Set Disk Encryption on the Instance Template used by the MIG to Google-managed Key and use BoringSSL library on all instance-to-instance communications.
Show Suggested Answer Hide Answer
Suggested Answer: A 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
subhala
Highly Voted 4 years, 4 months ago
when I revisited this, Now I think A is correct. In A - We will use an approved encryption method for encrypting Local SSD and VM to VM communication. In B and D, we are still using GCP's encryption algorithms and are not FIPS 140-2 approved. Moreover only the BoringCrypto is FIPS 140-2 approved and not the Boring SSL. I see A as evidently correct. ownez, genesis3k, MohitA has explained this and provided the right links too.
upvoted 16 times
...
Rakesh21
Most Recent 2 months, 1 week ago
Selected Answer: B
Disk Encryption with customer-managed keys: FIPS 140-2 compliance often requires encryption, and using customer-managed encryption keys (CMEK) ensures that you have control over the encryption keys, which can be crucial for compliance. Google Cloud supports FIPS 140-2 compliant encryption for data at rest with customer-managed keys. BoringSSL for data transit: BoringSSL is Google's fork of OpenSSL, designed to meet high standards of cryptographic security, including FIPS 140-2. Using BoringSSL for instance-to-instance communications ensures that data in transit is encrypted according to the necessary standards. Although UDP isn't inherently encrypted, you can implement encryption at the application layer using libraries like BoringSSL.
upvoted 1 times
...
p981pa123
2 months, 2 weeks ago
Selected Answer: A
"BoringSSL as a whole is not FIPS validated. However, there is a core library (called BoringCrypto) that has been FIPS validated."
upvoted 2 times
...
p981pa123
2 months, 3 weeks ago
Selected Answer: B
When you deploy Managed Instance Groups (MIGs), you typically create an instance template that defines the configuration of instances in the group, including the disk encryption settings.
upvoted 1 times
p981pa123
2 months, 2 weeks ago
I made a mistake. Answer is A. "BoringSSL as a whole is not FIPS validated. However, there is a core library (called BoringCrypto) that has been FIPS validated." https://boringssl.googlesource.com/boringssl/+/master/crypto/fipsmodule/FIPS.md
upvoted 1 times
...
...
SQLbox
6 months, 4 weeks ago
B. To comply with FIPS 140-2, the company needs to ensure that both data at rest and data in transit are encrypted using cryptographic libraries that are FIPS 140-2 certified.
  • Customer-managed keys (CMEK): Using customer-managed encryption keys (CMEK) in Google Cloud Key Management Service (KMS) ensures that encryption complies with FIPS 140-2 standards because the customer has control over the encryption keys and can ensure they are managed according to compliance requirements.
  • BoringSSL: A Google-maintained version of OpenSSL designed to be more streamlined and used in environments like Google Cloud, which includes support for FIPS 140-2 mode when linked to the BoringCrypto module. This library can be used to ensure that data in transit between instances is encrypted in compliance with FIPS.
upvoted 1 times
...
LaithTech
8 months, 1 week ago
Selected Answer: B
The correct answer is B
upvoted 1 times
...
3d9563b
8 months, 3 weeks ago
Selected Answer: B
A. Encrypt all cache storage and VM-to-VM communication using the BoringCrypto module: BoringCrypto is not an established or widely recognized cryptographic library for FIPS 140-2 compliance. Instead, BoringSSL or OpenSSL with FIPS validation should be used for both data-at-rest and data-in-transit encryption.
C. Change the app instance-to-instance communications from UDP to TCP and enable BoringSSL on clients' TLS connections: While changing from UDP to TCP might provide more reliable connections, it does not directly address FIPS 140-2 compliance. You still need to ensure that all data-in-transit encryption uses a validated cryptographic module such as BoringSSL.
D. Set Disk Encryption on the Instance Template used by the MIG to Google-managed Key and use BoringSSL library on all instance-to-instance communications: Google-managed keys for disk encryption do not provide the level of control required for FIPS 140-2 compliance, which typically requires customer-managed keys for greater control and accountability.
upvoted 1 times
...
gical
1 year, 3 months ago
Selected answer B https://cloud.google.com/security/compliance/fips-140-2-validated/ "Google’s Local SSD storage product is automatically encrypted with NIST approved ciphers, but Google's current implementation for this product doesn’t have a FIPS 140-2 validation certificate. If you require FIPS-validated encryption on Local SSD storage, you must provide your own encryption with a FIPS-validated cryptographic module."
upvoted 4 times
b6f53d8
1 year, 3 months ago
YES, as in your link: you need to encrypt SSD using your own solution, and BoringSSL is a library to use
upvoted 1 times
...
...
ArizonaClassics
1 year, 6 months ago
A. Encrypt all cache storage and VM-to-VM communication using the BoringCrypto module. This option ensures both storage (Local SSDs) and inter-instance communications are encrypted using a FIPS 140-2 compliant module.
upvoted 4 times
...
ArizonaClassics
1 year, 6 months ago
A. Encrypt all cache storage and VM-to-VM communication using the BoringCrypto module. This option ensures both storage (Local SSDs) and inter-instance communications are encrypted using a FIPS 140-2 compliant module.
upvoted 1 times
...
ymkk
1 year, 7 months ago
Selected Answer: A
https://cloud.google.com/security/compliance/fips-140-2-validated/
upvoted 2 times
...
gcpengineer
1 year, 11 months ago
Selected Answer: A
A is the ans
upvoted 2 times
...
pedrojorge
2 years, 2 months ago
Selected Answer: C
"BoringSSL as a whole is not FIPS validated. However, there is a core library (called BoringCrypto) that has been FIPS validated" https://boringssl.googlesource.com/boringssl/+/master/crypto/fipsmodule/FIPS.md
upvoted 3 times
...
AzureDP900
2 years, 5 months ago
https://cloud.google.com/docs/security/key-management-deep-dive A is right
upvoted 1 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: A
A. Encrypt all cache storage and VM-to-VM communication using the BoringCrypto module.
upvoted 1 times
...
sudarchary
3 years, 2 months ago
Selected Answer: A
FIPS140 module is supported
upvoted 2 times
...
[Removed]
3 years, 12 months ago
D is the correct answer
upvoted 2 times
...

Question 65

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 65 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 65
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A customer has an analytics workload running on Compute Engine that should have limited internet access.
Your team created an egress firewall rule to deny (priority 1000) all traffic to the internet.
The Compute Engine instances now need to reach out to the public repository to get security updates.
What should your team do?

  • A. Create an egress firewall rule to allow traffic to the CIDR range of the repository with a priority greater than 1000.
  • B. Create an egress firewall rule to allow traffic to the CIDR range of the repository with a priority less than 1000.
  • C. Create an egress firewall rule to allow traffic to the hostname of the repository with a priority greater than 1000.
  • D. Create an egress firewall rule to allow traffic to the hostname of the repository with a priority less than 1000.
Show Suggested Answer Hide Answer
Suggested Answer: B 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
dtmtor
Highly Voted 4 years ago
Answer is B. Lower number is higher priority and dest is only IP ranges in firewall rules
upvoted 26 times
...
[Removed]
Highly Voted 1 year, 3 months ago
Selected Answer: B
B… no hostname in firewall rules and lower number = higher priority.
upvoted 5 times
...
BPzen
Most Recent 4 months, 1 week ago
Selected Answer: B
While the priority is correct, Google Cloud firewall rules do not support hostname-based filtering. You must use a CIDR range.
upvoted 1 times
...
madcloud32
1 year, 1 month ago
Selected Answer: B
B is correct.
upvoted 1 times
...
shayke
2 years, 3 months ago
Selected Answer: B
Ans is B: lower number, higher priority.
upvoted 3 times
...
Littleivy
2 years, 4 months ago
Selected Answer: B
Answer is B
upvoted 3 times
...
GHOST1985
2 years, 5 months ago
Selected Answer: B
https://cloud.google.com/vpc/docs/firewalls#priority_order_for_firewall_rules
upvoted 4 times
...
AzureDP900
2 years, 5 months ago
B is correct
upvoted 2 times
...
Premumar
2 years, 5 months ago
Selected Answer: B
The first filter is that the priority should be less than 1000, so options A and C are rejected. Then, we use a CIDR range in the firewall rule. So, the final answer is B.
upvoted 3 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: B
B. Create an egress firewall rule to allow traffic to the CIDR range of the repository with a priority less than 1000. Firewall rules only support IPv4 connections. When specifying a source for an ingress rule or a destination for an egress rule by address, you can only use an IPv4 address or IPv4 block in CIDR notation. So Answer is B
upvoted 4 times
...
piyush_1982
2 years, 8 months ago
Selected Answer: A
The correct answer is A. As per the link https://cloud.google.com/vpc/docs/firewalls#rule_assignment Lowest priority in the firewall rule is 65535. So in order for a rule to be of higher priority than 1000 the rule should have a priority of number less than 1000.
upvoted 2 times
Premumar
2 years, 5 months ago
Your explanation is correct. But, option you selected is wrong. It has to be option B.
upvoted 3 times
...
...
Rithac
3 years, 9 months ago
I think I am confusing myself by overthinking the wording of this question. I know the answer is A or B, since using a hostname is not an option for a firewall egress rule destination. I also know that the firewall rule priority is an integer from 0 to 65535, inclusive, and lower integers indicate higher priorities. I could resolve this by setting the TCP port 80 rule to a priority of 500 (smaller number, but higher priority) and be done. Where I'm second-guessing myself is whether Google is referring to the integer or strictly to the priority: if the integer, I'd choose B ("priority less than 1000", the smaller number); if the priority, I'd choose A ("priority greater than 1000", still the lower number). Have I thoroughly confused this question? I'm leaning toward the answer being A.
upvoted 5 times
...
DebasishLowes
4 years ago
Ans : B
upvoted 3 times
...
ronron89
4 years, 4 months ago
Answer: B https://cloud.google.com/vpc/docs/firewalls#rule_assignment The priority of the second rule determines whether TCP traffic to port 80 is allowed for the webserver targets: If the priority of the second rule is set to a number greater than 1000, it has a lower priority, so the first rule denying all traffic applies. If the priority of the second rule is set to 1000, the two rules have identical priorities, so the first rule denying all traffic applies. If the priority of the second rule is set to a number less than 1000, it has a higher priority, thus allowing traffic on TCP 80 for the webserver targets. Absent other rules, the first rule would still deny other types of traffic to the webserver targets, and it would also deny all traffic, including TCP 80, to instances without the webserver tag.
upvoted 4 times
...
[Removed]
4 years, 5 months ago
Ans - B
upvoted 3 times
...
Raushanr
4 years, 6 months ago
The firewall rule priority is an integer from 0 to 65535, inclusive. Lower integers indicate higher priorities. If you do not specify a priority when creating a rule, it is assigned a priority of 1000.
upvoted 1 times
...
Raushanr
4 years, 6 months ago
Answer-B
upvoted 4 times
...

Question 66

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 66 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 66
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You want data on Compute Engine disks to be encrypted at rest with keys managed by Cloud Key Management Service (KMS). Cloud Identity and Access
Management (IAM) permissions to these keys must be managed in a grouped way because the permissions should be the same for all keys.
What should you do?

  • A. Create a single KeyRing for all persistent disks and all Keys in this KeyRing. Manage the IAM permissions at the Key level.
  • B. Create a single KeyRing for all persistent disks and all Keys in this KeyRing. Manage the IAM permissions at the KeyRing level.
  • C. Create a KeyRing per persistent disk, with each KeyRing containing a single Key. Manage the IAM permissions at the Key level.
  • D. Create a KeyRing per persistent disk, with each KeyRing containing a single Key. Manage the IAM permissions at the KeyRing level.
Show Suggested Answer Hide Answer
Suggested Answer: B 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
TNT87
Highly Voted 3 years, 8 months ago
Ans B https://cloud.netapp.com/blog/gcp-cvo-blg-how-to-use-google-cloud-encryption-with-a-persistent-disk
upvoted 15 times
...
[Removed]
Most Recent 10 months ago
Selected Answer: B
B… question states permissions should be the same for all keys.
upvoted 2 times
[Removed]
10 months ago
and should be managed in a group way.
upvoted 1 times
...
...
ArizonaClassics
1 year ago
B. Create a single KeyRing for all persistent disks and all Keys in this KeyRing. Manage the IAM permissions at the KeyRing level: This is efficient. By managing permissions at the KeyRing level, you're effectively grouping permissions for all keys in that KeyRing. As permissions should be the same for all keys, this is a logical choice.
upvoted 2 times
...
AzureDP900
1 year, 11 months ago
B is right
upvoted 1 times
...
shayke
1 year, 11 months ago
Selected Answer: B
All permissions are the same, controlled at the KeyRing level.
upvoted 2 times
...
AwesomeGCP
2 years ago
Selected Answer: B
B. Create a single KeyRing for all persistent disks and all Keys in this KeyRing. Manage the IAM permissions at the KeyRing level.
upvoted 3 times
...
roatest27
2 years, 6 months ago
Answer-B
upvoted 1 times
...
[Removed]
3 years, 6 months ago
How about A?
upvoted 1 times
[Removed]
3 years, 6 months ago
Oh, the same permissions, then I choose B.
upvoted 4 times
...
...
DebasishLowes
3 years, 6 months ago
Ans : B
upvoted 3 times
...
[Removed]
3 years, 11 months ago
Ans - B
upvoted 1 times
...
Raushanr
4 years ago
Answer-B
upvoted 1 times
...
Namaste
4 years ago
B is the right answer
upvoted 1 times
...
MohitA
4 years, 1 month ago
B should be the answer
upvoted 4 times
...

Question 67

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 67 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 67
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A company is backing up application logs to a Cloud Storage bucket shared with both analysts and the administrator. Analysts should only have access to logs that do not contain any personally identifiable information (PII). Log files containing PII should be stored in another bucket that is only accessible by the administrator.
What should you do?

  • A. Use Cloud Pub/Sub and Cloud Functions to trigger a Data Loss Prevention scan every time a file is uploaded to the shared bucket. If the scan detects PII, have the function move into a Cloud Storage bucket only accessible by the administrator.
  • B. Upload the logs to both the shared bucket and the bucket only accessible by the administrator. Create a job trigger using the Cloud Data Loss Prevention API. Configure the trigger to delete any files from the shared bucket that contain PII.
  • C. On the bucket shared with both the analysts and the administrator, configure Object Lifecycle Management to delete objects that contain any PII.
  • D. On the bucket shared with both the analysts and the administrator, configure a Cloud Storage Trigger that is only triggered when PII data is uploaded. Use Cloud Functions to capture the trigger and delete such files.
Show Suggested Answer Hide Answer
Suggested Answer: A 🗳️

Comments

MohitA
Highly Voted 3 years, 7 months ago
A is the ans
upvoted 17 times
...
talktolanka
Highly Voted 3 years ago
Answer A https://codelabs.developers.google.com/codelabs/cloud-storage-dlp-functions#0 https://www.youtube.com/watch?v=0TmO1f-Ox40
upvoted 8 times
...
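Option A's pipeline (upload → Pub/Sub → Cloud Function → DLP scan → route to the right bucket) can be sketched locally. This is a minimal, stdlib-only mock: the regex "scan" is a toy stand-in for the Cloud DLP API's infoType detectors, the dicts stand in for buckets, and all names are hypothetical.

```python
import re

# Toy stand-in for a DLP inspection: in production, a Cloud Function
# triggered via Pub/Sub would call the Cloud DLP API with infoTypes
# such as EMAIL_ADDRESS or US_SOCIAL_SECURITY_NUMBER.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email address
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # SSN-like number
]

def route_log(name, content, shared_bucket, admin_bucket):
    """Route an uploaded log: files containing PII go to the admin-only bucket."""
    if any(p.search(content) for p in PII_PATTERNS):
        admin_bucket[name] = content       # only the administrator can read this
        shared_bucket.pop(name, None)      # ensure analysts never see it
    else:
        shared_bucket[name] = content      # analysts may read this

shared, admin = {}, {}
route_log("app.log", "request ok from 10.0.0.1", shared, admin)
route_log("user.log", "login by alice@example.com", shared, admin)
```

In a real deployment the function would call the DLP API's inspect method and use the Cloud Storage client to rewrite the object into the admin-only bucket.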
Learn2fail
Most Recent 6 months, 2 weeks ago
Selected Answer: A
A is answer
upvoted 2 times
...
AzureDP900
1 year, 5 months ago
A is right
upvoted 2 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: A
A. Use Cloud Pub/Sub and Cloud Functions to trigger a Data Loss Prevention scan every time a file is uploaded to the shared bucket. If the scan detects PII, have the function move the file into a Cloud Storage bucket only accessible by the administrator.
upvoted 4 times
...
[Removed]
1 year, 7 months ago
Selected Answer: A
A it is.
upvoted 2 times
...
[Removed]
2 years, 12 months ago
I also choose A.
upvoted 3 times
...
DebasishLowes
3 years ago
Ans : A
upvoted 2 times
...
soukumar369
3 years, 3 months ago
Correct answer is A : Data Loss Prevention scan
upvoted 2 times
...
soukumar369
3 years, 3 months ago
A is correct.
upvoted 1 times
...
[Removed]
3 years, 5 months ago
Ans - A
upvoted 1 times
...
genesis3k
3 years, 5 months ago
Answer is A.
upvoted 1 times
...
passtest100
3 years, 6 months ago
SHOULD BE A
upvoted 1 times
...

Question 68

Exam Professional Cloud Security Engineer topic 1 question 68 discussion

Question #: 68
Topic #: 1

A customer terminates an engineer and needs to make sure the engineer's Google account is automatically deprovisioned.
What should the customer do?

  • A. Use the Cloud SDK with their directory service to remove their IAM permissions in Cloud Identity.
  • B. Use the Cloud SDK with their directory service to provision and deprovision users from Cloud Identity.
  • C. Configure Cloud Directory Sync with their directory service to provision and deprovision users from Cloud Identity.
  • D. Configure Cloud Directory Sync with their directory service to remove their IAM permissions in Cloud Identity.
Suggested Answer: C 🗳️

Comments

[Removed]
Highly Voted 3 years, 11 months ago
Ans - C
upvoted 7 times
...
MohitA
Highly Voted 4 years, 1 month ago
C is the Answer
upvoted 7 times
ownez
4 years, 1 month ago
Agree with C. "https://cloud.google.com/identity/solutions/automate-user-provisioning#cloud_identity_automated_provisioning" "Cloud Identity has a catalog of automated provisioning connectors, which act as a bridge between Cloud Identity and third-party cloud apps."
upvoted 11 times
AzureDP900
1 year, 11 months ago
Agree with C, there is no need of cloud SDK.
upvoted 2 times
AzureDP900
1 year, 11 months ago
C. Configure Cloud Directory Sync with their directory service to provision and deprovision users from Cloud Identity.
upvoted 1 times
...
...
mynk29
2 years, 7 months ago
This option is for Cloud Identity to third-party apps; you configure directory sync between AD and Cloud Identity.
upvoted 2 times
...
...
...
pradoUA
Most Recent 1 year ago
Selected Answer: C
C is correct
upvoted 2 times
...
AzureDP900
1 year, 11 months ago
C. Configure Cloud Directory Sync with their directory service to provision and deprovision users from Cloud Identity.
upvoted 1 times
...
AwesomeGCP
2 years ago
Selected Answer: C
C. Configure Cloud Directory Sync with their directory service to provision and deprovision users from Cloud Identity.
upvoted 2 times
...
piyush_1982
2 years, 2 months ago
Selected Answer: C
Definitely C
upvoted 2 times
...
mynk29
2 years, 7 months ago
I don't think C is the right answer. You configure Directory Sync to sync from AD to Cloud Identity, not the other way round. Once a user is terminated, their account should be disabled in the directory, and Cloud Identity will pick it up via IAM. D looks more correct to me.
upvoted 2 times
AkbarM
2 years ago
I also support D. The question may say provision and deprovision users, but technically it is to remove their IAM permissions in Cloud Identity. There is nothing like provisioning/deprovisioning a user from Cloud Identity.
upvoted 1 times
rohan0411
9 months, 2 weeks ago
C is correct, because you cannot control IAM from Cloud Identity. Cloud Identity only manages users and groups; it cannot remove IAM permissions.
upvoted 1 times
...
...
...
DebasishLowes
3 years, 7 months ago
Ans is C
upvoted 3 times
...

Question 69

Exam Professional Cloud Security Engineer topic 1 question 69 discussion

Question #: 69
Topic #: 1

An organization is evaluating the use of Google Cloud Platform (GCP) for certain IT workloads. A well-established directory service is used to manage user identities and lifecycle management. This directory service must continue for the organization to use as the `source of truth` directory for identities.
Which solution meets the organization's requirements?

  • A. Google Cloud Directory Sync (GCDS)
  • B. Cloud Identity
  • C. Security Assertion Markup Language (SAML)
  • D. Pub/Sub
Suggested Answer: A 🗳️

Comments

desertlotus1211
Highly Voted 3 years ago
The answer is A: With Google Cloud Directory Sync (GCDS), you can synchronize the data in your Google Account with your Microsoft Active Directory or LDAP server. GCDS doesn't migrate any content (such as email messages, calendar events, or files) to your Google Account. You use GCDS to synchronize your Google users, groups, and shared contacts to match the information in your LDAP server. The question says the well-established directory service is the 'source of truth', not GCP. So LDAP or AD is the source; GCDS will sync to match it, not replace it.
upvoted 17 times
AzureDP900
1 year, 5 months ago
Agreed
upvoted 2 times
...
...
subhala
Highly Voted 3 years, 4 months ago
GCDS? It helps sync from the source of truth (any IdP, like LDAP or AD) to Google identity. In this scenario, the question is what can be a good identity service by itself, hence B is the right answer.
upvoted 12 times
desertlotus1211
7 months, 2 weeks ago
The question implies the company has a directory as the source of truth and wants to maintain that in GCP. GCDS will make sure that carries over to Cloud Identity. It's not asking for a replacement of LDAP/AD.
upvoted 2 times
...
...
ArizonaClassics
Most Recent 6 months, 4 weeks ago
Google Cloud Directory Sync (GCDS): GCDS is a tool used to synchronize your Google Workspace user data with your Microsoft Active Directory or other LDAP servers. This would ensure that Google Workspace has the same user data as your existing directory, but it doesn't act as an identity provider (IDP). BUT C. Security Assertion Markup Language (SAML): SAML is an open standard for exchanging authentication and authorization data between an identity provider (your organization's existing directory service) and a service provider (like GCP). With SAML, GCP can rely on your existing directory service for authentication, and your existing directory remains the "source of truth."
upvoted 2 times
...
PST21
1 year, 3 months ago
The org is evaluating Google Cloud, so Cloud Identity is the Google Cloud product; hence B.
upvoted 1 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: A
A. Google Cloud Directory Sync (GCDS)
upvoted 4 times
...
cloudprincipal
1 year, 10 months ago
Selected Answer: A
With Google Cloud Directory Sync (GCDS), you can synchronize the data in your Google Account with your Microsoft Active Directory or LDAP server. GCDS doesn't migrate any content (such as email messages, calendar events, or files) to your Google Account. You use GCDS to synchronize your Google users, groups, and shared contacts to match the information in your LDAP server. https://support.google.com/a/answer/106368?hl=en
upvoted 3 times
...
szl0144
1 year, 10 months ago
B should be the answer; GCDS is for AD sync.
upvoted 2 times
MariaGabiGabriela
1 year, 10 months ago
Yes, but Cloud Identity by itself solves nothing; the user would have to recreate all users and thus have a different IdP, which clearly goes against the question.
upvoted 2 times
...
...
Bill831231
2 years, 4 months ago
It seems there is nothing mentioned about what they have on-premises, so B is better.
upvoted 1 times
...
syllox
2 years, 11 months ago
Answer A
upvoted 3 times
...
WakandaF
2 years, 11 months ago
A or B?
upvoted 2 times
...
DebasishLowes
3 years, 1 month ago
Ans : B as per the question.
upvoted 1 times
...
asee
3 years, 1 month ago
My answer goes for A (GCDS). Notice the question says a directory service 'is used' / 'must continue', instead of 'will be used'. So my understanding is the organization is already using its own directory service; therefore answer B (Cloud Identity) may not be an option.
upvoted 4 times
...
KWatHK
3 years, 2 months ago
Ans is B because the question said "the well-established directory must continue for the organization to use as the source of truth", so user access to GCP must be authenticated by the existing directory. Cloud Identity supports federating to a third party/ADFS using SAML.
upvoted 1 times
...
mikelabs
3 years, 4 months ago
GCDS is an app to sync users, groups, and other features from AD to Cloud Identity. But in this question, the customer needs to know which product on GCP meets the requirement. So I think the answer is B.
upvoted 8 times
...
[Removed]
3 years, 5 months ago
Ans - A
upvoted 3 times
...
ownez
3 years, 7 months ago
GCDS is a part of Cloud Identity's feature set that synchronizes the data in a Google domain to match an AD/LDAP server: users, groups, contacts, etc. are synchronized/migrated to match. Hence, I would go B. "https://se-cloud-experts.com/wp/wp-content/themes/se-it/images/pdf/google-cloud-identity-services.pdf"
upvoted 3 times
ownez
3 years, 6 months ago
Sorry. It's A.
upvoted 2 times
...
...
bogdant
3 years, 7 months ago
Isn't it A?
upvoted 2 times
MohitA
3 years, 7 months ago
Agree A
upvoted 4 times
...
Sheeda
3 years, 7 months ago
That is used to sync, not the directory itself
upvoted 1 times
Fellipo
3 years, 5 months ago
A well-established directory service, so "A"
upvoted 2 times
...
...
...

Question 70

Exam Professional Cloud Security Engineer topic 1 question 70 discussion

Question #: 70
Topic #: 1

Which international compliance standard provides guidelines for information security controls applicable to the provision and use of cloud services?

  • A. ISO 27001
  • B. ISO 27002
  • C. ISO 27017
  • D. ISO 27018
Suggested Answer: C 🗳️

Comments

asee
Highly Voted 3 years, 1 month ago
Yes, my answer also goes to C; my last compliance-related project also worked on ISO 27017 in order to extend the scope to cloud service users/providers.
upvoted 11 times
AzureDP900
1 year, 5 months ago
C is right
upvoted 1 times
AzureDP900
1 year, 5 months ago
https://cloud.google.com/security/compliance/iso-27017
upvoted 2 times
...
...
...
pradoUA
Most Recent 6 months, 1 week ago
Selected Answer: C
C. ISO 27017
upvoted 2 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: C
C. ISO 27017
upvoted 4 times
...
certificationjjmmm
1 year, 8 months ago
C is correct. https://cloud.google.com/security/compliance/iso-27017
upvoted 3 times
...
[Removed]
3 years, 5 months ago
Ans - C
upvoted 3 times
...
Namaste
3 years, 6 months ago
CCSP Question...C is the Answer
upvoted 3 times
...
ownez
3 years, 7 months ago
C is correct. "https://www.iso.org/standard/43757.html"
upvoted 4 times
...

Question 71

Exam Professional Cloud Security Engineer topic 1 question 71 discussion

Question #: 71
Topic #: 1

You will create a new Service Account that should be able to list the Compute Engine instances in the project. You want to follow Google-recommended practices.
What should you do?

  • A. Create an Instance Template, and allow the Service Account Read Only access for the Compute Engine Access Scope.
  • B. Create a custom role with the permission compute.instances.list and grant the Service Account this role.
  • C. Give the Service Account the role of Compute Viewer, and use the new Service Account for all instances.
  • D. Give the Service Account the role of Project Viewer, and use the new Service Account for all instances.
Suggested Answer: B 🗳️

Comments

MohitA
Highly Voted 3 years, 7 months ago
B, https://cloud.google.com/compute/docs/access/iam
upvoted 16 times
mlyu
3 years, 7 months ago
Although it is not encouraged to use custom roles, the last sentence in answer C makes B the only option.
upvoted 7 times
...
AzureDP900
1 year, 5 months ago
B is right
upvoted 2 times
...
...
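For answer B, a custom role can be defined from a YAML file and bound to the service account with gcloud. A sketch, with hypothetical role, project, and service-account names; the gcloud calls are left commented out since they need a real project:

```shell
# Answer B: a custom role carrying only compute.instances.list.
# All names below (instanceLister, my-project, lister@...) are examples.
cat > role.yaml <<'EOF'
title: Instance Lister
description: Can list Compute Engine instances only
stage: GA
includedPermissions:
- compute.instances.list
EOF

# Create the role and grant it to the service account:
# gcloud iam roles create instanceLister --project=my-project --file=role.yaml
# gcloud projects add-iam-policy-binding my-project \
#   --member=serviceAccount:lister@my-project.iam.gserviceaccount.com \
#   --role=projects/my-project/roles/instanceLister
cat role.yaml
```

The single-permission role is what makes this the least-privilege choice over Compute Viewer, which grants read access to far more resource types.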
sudarchary
Highly Voted 2 years, 2 months ago
B. The only option that adheres to the principle of least privilege and meets question requirements is B
upvoted 5 times
...
ArizonaClassics
Most Recent 6 months, 4 weeks ago
B. Create a custom role with the permission compute.instances.list and grant the Service Account this role: This follows the principle of least privilege by granting only the specific permission needed.
upvoted 2 times
...
Brosh
1 year, 3 months ago
I don't get why it is not C. You grant that specific service account the role over all instances. Is it wrong because that service account will be able to view more than just compute instances?
upvoted 2 times
...
shayke
1 year, 3 months ago
Selected Answer: B
B is the right ans - you only want to list the instances
upvoted 3 times
...
Meyucho
1 year, 3 months ago
Selected Answer: B
With C the SA will list ONLY the instances that are configured to use that SA. The option B will give permissions to list ALL instances.
upvoted 3 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: B
B. Create a custom role with the permission compute.instances.list and grant the Service Account this role.
upvoted 3 times
...
nbrnschwgr
1 year, 7 months ago
C, because Google recommends predefined narrow-scope roles over custom roles.
upvoted 2 times
...
Roflcopter
1 year, 8 months ago
Selected Answer: B
Key here is "and grant the Service Account this role." C and D are giving this role to ALL instances, which is overly permissive. A is wrong. The only choice is B.
upvoted 5 times
...
cloudprincipal
1 year, 10 months ago
Selected Answer: B
The roles/compute.viewer provides a lot more privileges than just listing compute instances
upvoted 4 times
...
cloudprincipal
1 year, 10 months ago
Selected Answer: C
Compute Viewer Read-only access to get and list Compute Engine resources, without being able to read the data stored on them. https://cloud.google.com/compute/docs/access/iam#compute.viewer
upvoted 2 times
cloudprincipal
1 year, 9 months ago
This is incorrect, as Compute Viewer provides a lot more than what is required
upvoted 1 times
...
...
[Removed]
2 years, 12 months ago
I think C is good
upvoted 4 times
...
DebasishLowes
3 years ago
Ans : B
upvoted 1 times
...
dtmtor
3 years ago
Ans is B
upvoted 1 times
...
[Removed]
3 years, 5 months ago
Ans - B
upvoted 1 times
...
genesis3k
3 years, 5 months ago
Answer is B, based on least privilege principle.
upvoted 1 times
...

Question 72

Exam Professional Cloud Security Engineer topic 1 question 72 discussion

Question #: 72
Topic #: 1

In a shared security responsibility model for IaaS, which two layers of the stack does the customer share responsibility for? (Choose two.)

  • A. Hardware
  • B. Network Security
  • C. Storage Encryption
  • D. Access Policies
  • E. Boot
Suggested Answer: BD 🗳️

Comments

DebasishLowes
Highly Voted 3 years, 6 months ago
Ans : BD
upvoted 12 times
...
AliHammoud
Most Recent 6 months, 3 weeks ago
B and D
upvoted 1 times
...
GCBC
1 year, 1 month ago
Look at the diagram; it's B and D -> https://cloud.google.com/architecture/framework/security/shared-responsibility-shared-fate#shared-diagram
upvoted 4 times
...
GCBC
1 year, 1 month ago
B. Network Security D. Access Policies
upvoted 2 times
...
sushmitha95
1 year, 8 months ago
Selected Answer: BD
D. Access Policies B. Network Security
upvoted 3 times
...
shayke
1 year, 9 months ago
B and D, according to the shared responsibility model for IaaS.
upvoted 2 times
...
AwesomeGCP
2 years ago
Selected Answer: BD
B. Network Security D. Access Policies
upvoted 3 times
...
Random_Mane
2 years ago
Selected Answer: BD
Chart is here https://cloud.google.com/architecture/framework/security/shared-responsibility-shared-fate
upvoted 3 times
...
rr4444
2 years, 9 months ago
Selected Answer: BD
BD https://cloud.google.com/blog/products/containers-kubernetes/exploring-container-security-the-shared-responsibility-model-in-gke-container-security-shared-responsibility-model-gke
upvoted 4 times
...
[Removed]
3 years, 11 months ago
Ans - BD
upvoted 4 times
...
saurabh1805
3 years, 11 months ago
B and D is correct option.
upvoted 4 times
...
passtest100
4 years ago
B and D
upvoted 4 times
...
lordb
4 years ago
B and D
upvoted 4 times
...

Question 73

Exam Professional Cloud Security Engineer topic 1 question 73 discussion

Question #: 73
Topic #: 1

An organization is starting to move its infrastructure from its on-premises environment to Google Cloud Platform (GCP). The first step the organization wants to take is to migrate its ongoing data backup and disaster recovery solutions to GCP. The organization's on-premises production environment is going to be the next phase for migration to GCP. Stable networking connectivity between the on-premises environment and GCP is also being implemented.
Which GCP solution should the organization use?

  • A. BigQuery using a data pipeline job with continuous updates via Cloud VPN
  • B. Cloud Storage using a scheduled task and gsutil via Cloud Interconnect
  • C. Compute Engine Virtual Machines using Persistent Disk via Cloud Interconnect
  • D. Cloud Datastore using regularly scheduled batch upload jobs via Cloud VPN
Suggested Answer: B 🗳️

Comments

ownez
Highly Voted 4 years, 1 month ago
Agree B. https://cloud.google.com/solutions/dr-scenarios-for-data#production_environment_is_on-premises
upvoted 11 times
...
madcloud32
Most Recent 7 months ago
Selected Answer: B
Data Backup to GCP, so B is correct
upvoted 1 times
...
Xoxoo
1 year ago
Selected Answer: B
To migrate ongoing data backup and disaster recovery solutions to Google Cloud Platform (GCP), the most suitable GCP solution for the organization would be Cloud Storage using a scheduled task and gsutil via Cloud Interconnect. This solution offers scalability, cost-efficiency, and features essential for backup and disaster recovery solutions. Cloud Storage provides a scalable object storage service that allows you to store and retrieve large amounts of data. By using a scheduled task and gsutil, you can automate the backup process and ensure that your data is securely stored in the cloud. Cloud Interconnect ensures stable networking connectivity between the on-premises environment and GCP, making it an ideal choice for migrating data backup and disaster recovery solutions
upvoted 3 times
...
TNT87
1 year, 6 months ago
https://cloud.google.com/architecture/dr-scenarios-for-data#back-up-to-cloud-storage-using-a-scheduled-task
upvoted 1 times
...
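Answer B in practice is just a scheduled gsutil rsync into a backup bucket over the Interconnect link. A sketch with hypothetical bucket and path names; the sync itself is commented out because it needs gsutil and a reachable bucket:

```shell
# Answer B: nightly backup of on-premises data to Cloud Storage via a
# scheduled task. Bucket and source path are hypothetical examples.
BUCKET="gs://example-dr-backups"
SRC="/var/backups"

# The actual sync (uncomment on a machine with gsutil configured):
# gsutil -m rsync -r -d "$SRC" "$BUCKET/$(hostname)"

# Example crontab entry running the sync at 02:00 daily:
echo '0 2 * * * gsutil -m rsync -r /var/backups gs://example-dr-backups/$(hostname)' > backup.cron
cat backup.cron
```

The `-d` flag mirrors deletions as well; drop it if the bucket should keep every object ever backed up.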
shayke
1 year, 9 months ago
Selected Answer: B
B- backup and DR is GCS
upvoted 2 times
...
rotorclear
1 year, 11 months ago
Selected Answer: B
https://medium.com/@pvergadia/cold-disaster-recovery-on-google-cloud-for-applications-running-on-premises-114b31933d02
upvoted 2 times
AzureDP900
1 year, 11 months ago
B is correct
upvoted 1 times
...
...
AwesomeGCP
2 years ago
Selected Answer: B
B. Cloud Storage using a scheduled task and gsutil via Cloud Interconnect
upvoted 2 times
...
cloudprincipal
2 years, 4 months ago
Selected Answer: B
https://cloud.google.com/solutions/dr-scenarios-for-data#production_environment_is_on-premises
upvoted 1 times
...
rr4444
2 years, 9 months ago
Selected Answer: C
Disaster recovery made me think C: Compute Engine Virtual Machines using Persistent Disk via Cloud Interconnect. Disaster recovery with remote backup alone, when all prod is on-premises, will take too long to be viable. The VMs don't need to be running when there is no disaster.
upvoted 3 times
desertlotus1211
1 year, 1 month ago
You never move compute first...
upvoted 1 times
...
csrazdan
1 year, 10 months ago
You would have been correct if the question had any RTO/RPO specifications. In the absence of those, the question assumes backup and restore as the DR strategy. So option B, Cloud Storage, is the correct answer.
upvoted 1 times
...
...
DebasishLowes
3 years, 6 months ago
Ans : B
upvoted 2 times
...
[Removed]
3 years, 11 months ago
Ans - V
upvoted 1 times
[Removed]
3 years, 11 months ago
Typo - it's B
upvoted 2 times
...
...

Question 74

Exam Professional Cloud Security Engineer topic 1 question 74 discussion

Question #: 74
Topic #: 1

What are the steps to encrypt data using envelope encryption?
A.
✑ Generate a data encryption key (DEK) locally.
✑ Use a key encryption key (KEK) to wrap the DEK.
✑ Encrypt data with the KEK.
✑ Store the encrypted data and the wrapped KEK.
B.
✑ Generate a key encryption key (KEK) locally.
✑ Use the KEK to generate a data encryption key (DEK).
✑ Encrypt data with the DEK.
✑ Store the encrypted data and the wrapped DEK.
C.
✑ Generate a data encryption key (DEK) locally.
✑ Encrypt data with the DEK.
✑ Use a key encryption key (KEK) to wrap the DEK.
✑ Store the encrypted data and the wrapped DEK.
D.
✑ Generate a key encryption key (KEK) locally.
✑ Generate a data encryption key (DEK) locally.
✑ Encrypt data with the KEK.
✑ Store the encrypted data and the wrapped DEK.

Suggested Answer: C

Reference:
https://cloud.google.com/kms/docs/envelope-encryption

Comments

Tabayashi
Highly Voted 2 years, 11 months ago
Answer is (C). The process of encrypting data is to generate a DEK locally, encrypt data with the DEK, use a KEK to wrap the DEK, and then store the encrypted data and the wrapped DEK. The KEK never leaves Cloud KMS. https://cloud.google.com/kms/docs/envelope-encryption#how_to_encrypt_data_using_envelope_encryption
upvoted 19 times
AzureDP900
2 years, 5 months ago
C is right
upvoted 3 times
...
...
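The four steps in answer C can be traced in a few lines. This sketch uses a toy XOR keystream as a stand-in cipher (not real cryptography) purely to show the envelope flow; in GCP the wrap/unwrap calls would go to Cloud KMS, and the KEK would never leave it.

```python
import hashlib
import secrets

def toy_cipher(key, data):
    """Toy XOR keystream — a stand-in for AES, NOT real cryptography."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))  # symmetric: same call decrypts

kek = secrets.token_bytes(32)        # in GCP, this key lives only inside Cloud KMS
dek = secrets.token_bytes(32)        # 1. generate a DEK locally
ciphertext = toy_cipher(dek, b"payroll records")  # 2. encrypt data with the DEK
wrapped_dek = toy_cipher(kek, dek)   # 3. use the KEK to wrap the DEK
stored = (ciphertext, wrapped_dek)   # 4. store the encrypted data + wrapped DEK

# Decryption reverses the wrap: unwrap the DEK with the KEK, then decrypt.
recovered_dek = toy_cipher(kek, stored[1])
assert toy_cipher(recovered_dek, stored[0]) == b"payroll records"
```

Note the plaintext DEK is never stored — only the wrapped copy — which is exactly why option C's ordering (encrypt first, wrap, then store) is the correct one.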
Mr_MIXER007
Most Recent 7 months, 2 weeks ago
Answer is (C).
upvoted 1 times
...
desertlotus1211
1 year, 7 months ago
Answer is C; https://cloud.google.com/kms/docs/envelope-encryption#:~:text=decrypt%20data%20directly.-,How%20to%20encrypt%20data%20using%20envelope%20encryption,data%20and%20the%20wrapped%20DEK.
upvoted 3 times
...
Appsec977
1 year, 10 months ago
C is the correct solution because KEK is never generated on the client's side, KEK is stored in GCP.
upvoted 4 times
...
AwesomeGCP
2 years, 6 months ago
Answer - C is correct. https://cloud.google.com/kms/docs/envelope-encryption#how_to_encrypt_data_using_envelope_encryption
upvoted 3 times
...
[Removed]
2 years, 7 months ago
C it is
upvoted 3 times
...

Question 75

Exam Professional Cloud Security Engineer topic 1 question 75 discussion

Question #: 75
Topic #: 1

A customer wants to make it convenient for their mobile workforce to access a CRM web interface that is hosted on Google Cloud Platform (GCP). The CRM can only be accessed by someone on the corporate network. The customer wants to make it available over the internet. Your team requires an authentication layer in front of the application that supports two-factor authentication
Which GCP product should the customer implement to meet these requirements?

  • A. Cloud Identity-Aware Proxy
  • B. Cloud Armor
  • C. Cloud Endpoints
  • D. Cloud VPN
Suggested Answer: A 🗳️

Comments

asee
Highly Voted 4 years, 1 month ago
My answer is going for A. Cloud IAP is integrated with Google Sign-in which Multi-factor authentication can be enabled. https://cloud.google.com/iap/docs/concepts-overview
upvoted 20 times
AzureDP900
2 years, 5 months ago
I agree and A is right
upvoted 2 times
...
...
MohitA
Highly Voted 4 years, 7 months ago
A is the Answer
upvoted 7 times
...
AgoodDay
Most Recent 8 months ago
Selected Answer: A
Technically, a Cloud VPN implementation means the app will not be available from the internet. So the answer shall be A.
upvoted 1 times
...
madcloud32
1 year, 1 month ago
Selected Answer: A
Answer is A. IAP, NAT, and bastion hosts can be accessed from the internet.
upvoted 1 times
...
[Removed]
1 year, 3 months ago
Selected Answer: A
A… def IAP for this use case
upvoted 2 times
...
mahi9
2 years, 1 month ago
Selected Answer: A
the most viable one is A
upvoted 3 times
...
sushmitha95
2 years, 2 months ago
A. Cloud Identity-Aware Proxy
upvoted 2 times
...
Brosh
2 years, 3 months ago
Why isn't D right? It adds another layer of auth, it supports MFA, and it's a logical way to give a remote user access to resources.
upvoted 3 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: A
A. Cloud Identity-Aware Proxy I think it’s A. The question asks for an authentication layer.
upvoted 3 times
...
danielklein09
3 years, 2 months ago
Selected Answer: A
A is the correct answer
upvoted 3 times
...
[Removed]
4 years, 5 months ago
Ans - A
upvoted 4 times
...
passtest100
4 years, 6 months ago
SHOULD BE A
upvoted 5 times
...
Raushanr
4 years, 6 months ago
Answer -A
upvoted 4 times
...

Question 76

Exam Professional Cloud Security Engineer topic 1 question 76 discussion

Question #: 76
Topic #: 1

Your company is storing sensitive data in Cloud Storage. You want a key generated on-premises to be used in the encryption process.
What should you do?

  • A. Use the Cloud Key Management Service to manage a data encryption key (DEK).
  • B. Use the Cloud Key Management Service to manage a key encryption key (KEK).
  • C. Use customer-supplied encryption keys to manage the data encryption key (DEK).
  • D. Use customer-supplied encryption keys to manage the key encryption key (KEK).
Suggested Answer: C 🗳️

Comments

HateMicrosoft
Highly Voted 4 years ago
The answer is C. This is customer-supplied encryption keys (CSEK): we generate our own encryption key and manage it on-premises. A KEK never leaves Cloud KMS; there is no KEK or KMS on-premises. Encryption at rest by default, with various key management options: https://cloud.google.com/security/encryption-at-rest
upvoted 32 times
...
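What "customer-supplied" looks like in practice: per the CSEK documentation, the customer generates a 256-bit AES key (e.g. on-premises) and sends it base64-encoded with each Cloud Storage request in the x-goog-encryption-key header, alongside the base64-encoded SHA-256 hash of the key. A minimal sketch:

```python
import base64
import hashlib
import secrets

# The key is generated and held by the customer; Google stores only its hash.
raw_key = secrets.token_bytes(32)  # 256-bit AES key, generated locally
csek = base64.b64encode(raw_key).decode("ascii")  # x-goog-encryption-key value
key_hash = base64.b64encode(hashlib.sha256(raw_key).digest()).decode("ascii")
# key_hash is sent as x-goog-encryption-key-sha256

print(csek, key_hash)
```

gsutil can supply the same key through its boto configuration (the encryption_key setting) instead of raw headers.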
sudarchary
Highly Voted 3 years, 2 months ago
Selected Answer: D
Reference Links: https://cloud.google.com/kms/docs/envelope-encryption https://cloud.google.com/security/encryption-at-rest/customer-supplied-encryption-keys
upvoted 9 times
...
brpjp
Most Recent 6 months, 3 weeks ago
Correct answer: D. The CSEK is provided by the customer and acts as the key encryption key (KEK) for chunk keys; it wraps the chunk keys. See https://cloud.google.com/docs/security/encryption/customer-supplied-encryption-keys#cloud_storage. Some of us have provided the correct link but not interpreted it correctly and selected answer C, which is not correct. A and B are not correct because it is CSEK.
upvoted 2 times
...
Mr_MIXER007
7 months, 2 weeks ago
Selected Answer: C
The answer is C.
upvoted 1 times
...
3d9563b
8 months, 3 weeks ago
Selected Answer: C
By using customer-supplied encryption keys (CSEK) to manage the data encryption key (DEK), you can ensure that the encryption process utilizes a key that was generated and controlled on-premises, meeting your security and compliance requirements.
upvoted 1 times
...
salamKvelas
11 months ago
`customer-supplied encryption keys` == DEK, so the only answer that makes sense is A: use KMS for a KEK to wrap the DEK.
upvoted 1 times
...
shanwford
11 months, 1 week ago
Selected Answer: C
Can't be A/B because of the "key generated on-premises" requirement; a KEK is KMS-specific. Why C: https://cloud.google.com/docs/security/encryption/customer-supplied-encryption-keys#cloud_storage --> "The raw CSEK is used to unwrap wrapped chunk keys, to create raw chunk keys in memory. These are used to decrypt data chunks stored in the storage systems. These keys are used as the data encryption keys (DEK) in Google Cloud Storage for your data."
upvoted 1 times
...
madcloud32
1 year, 1 month ago
Selected Answer: C
C is answer. DEK
upvoted 1 times
...
mjcts
1 year, 2 months ago
Selected Answer: C
Customer-supplied because it is generated on prem. And we can only talk about DEK. KEK is always managed by Google
upvoted 1 times
...
rsamant
1 year, 4 months ago
D. CSEK is used as the KEK; the DEK is always generated by Google, as different chunks use different DEKs. From the docs table: "Raw CSEK | Storage system memory | Provided by the customer. Key encryption key (KEK) for chunk keys. Wraps the chunk keys. | Customer-requested operation (e.g., insertObject or getObject) is complete." https://cloud.google.com/docs/security/encryption/customer-supplied-encryption-keys
upvoted 3 times
...
rottzy
1 year, 6 months ago
C, KEK is google managed
upvoted 1 times
...
Xoxoo
1 year, 6 months ago
Selected Answer: C
To use a key generated on-premises for encrypting data in Cloud Storage, you should: C. Use customer-supplied encryption keys to manage the data encryption key (DEK). With customer-supplied encryption keys (CSEK), you can provide your own encryption keys, generated and managed on-premises, to encrypt and decrypt data in Cloud Storage. The data encryption key (DEK) is the key used to encrypt the actual data, and by using CSEK, you can manage this key with your own on-premises key management system.
upvoted 1 times
Xoxoo
1 year, 6 months ago
Options A and B involve using Google Cloud's Key Management Service (KMS), which generates and manages encryption keys within Google Cloud, not on-premises. Option D is not a common practice and is not directly supported for encrypting data in Cloud Storage.
upvoted 2 times
...
...
ananta93
1 year, 7 months ago
Selected Answer: C
The Answer is C. The raw CSEK is used to unwrap wrapped chunk keys, to create raw chunk keys in memory. These are used to decrypt data chunks stored in the storage systems. These keys are used as the data encryption keys (DEK) in Google Cloud Storage for your data. https://cloud.google.com/docs/security/encryption/customer-supplied-encryption-keys#cloud_storage
upvoted 2 times
...
desertlotus1211
1 year, 7 months ago
Answer is C: https://cloud.google.com/docs/security/encryption/customer-supplied-encryption-keys#cloud_storage. If you look at the ENTIRE process, the CSEK is used to create the DEK (the final product) for decrypting the data...
upvoted 3 times
...
RuchiMishra
1 year, 7 months ago
Selected Answer: D
https://cloud.google.com/docs/security/encryption/customer-supplied-encryption-keys#cloud_storage
upvoted 2 times
...
civilizador
1 year, 8 months ago
C. The answer is C, and I don't understand why some people here are rewriting the official Google doc and saying the answer is D. Here is the link, please read it carefully, this is not an Instagram feed. When you read for 3 seconds and then come here, you start confusing many people. Here is the link, SPECIFICALLY FOR CLOUD STORAGE: https://cloud.google.com/docs/security/encryption/customer-supplied-encryption-keys#cloud_storage
upvoted 3 times
MaryKey
1 year, 7 months ago
I'm confused here - the article on Google says literally: "Raw CSEK - Provided by the customer. Key encryption key (KEK) for chunk keys. Wraps the chunk keys". In other words - KEK, not DEK
upvoted 3 times
...
...
[Removed]
1 year, 8 months ago
Selected Answer: C
"C" KEK never leaves Cloud KMS. Customer supplied key can only be for DEK.
upvoted 3 times
...
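To make the CSEK mechanics in this thread concrete, here is a minimal Python sketch (standard library only) of the three request headers Cloud Storage expects when a customer supplies its own AES-256 key. The key is generated locally purely for illustration; in practice it would come from your on-premises key management system.

```python
import base64
import hashlib
import os

# Generate a 256-bit key "on-premises" (illustrative only; a real CSEK
# would come from your own key management system).
raw_key = os.urandom(32)

# Cloud Storage CSEK request headers: the algorithm, the base64-encoded
# raw key, and the base64-encoded SHA-256 hash of the raw key.
csek_headers = {
    "x-goog-encryption-algorithm": "AES256",
    "x-goog-encryption-key": base64.b64encode(raw_key).decode("ascii"),
    "x-goog-encryption-key-sha256": base64.b64encode(
        hashlib.sha256(raw_key).digest()
    ).decode("ascii"),
}

for name, value in csek_headers.items():
    print(name, value)
```

Note how this matches the discussion above: Google never stores the raw key, and inside the service it is used to wrap/unwrap the per-chunk keys.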

Question 77


Exam Professional Cloud Security Engineer topic 1 question 77 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 77
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Last week, a company deployed a new App Engine application that writes logs to BigQuery. No other workloads are running in the project. You need to validate that all data written to BigQuery was done using the App Engine Default Service Account.
What should you do?

  • A. 1. Use Cloud Logging and filter on BigQuery Insert Jobs. 2. Click on the email address in line with the App Engine Default Service Account in the authentication field. 3. Click Hide Matching Entries. 4. Make sure the resulting list is empty.
  • B. 1. Use Cloud Logging and filter on BigQuery Insert Jobs. 2. Click on the email address in line with the App Engine Default Service Account in the authentication field. 3. Click Show Matching Entries. 4. Make sure the resulting list is empty.
  • C. 1. In BigQuery, select the related dataset. 2. Make sure that the App Engine Default Service Account is the only account that can write to the dataset.
  • D. 1. Go to the Identity and Access Management (IAM) section of the project. 2. Validate that the App Engine Default Service Account is the only account that has a role that can write to BigQuery.
Suggested Answer: A 🗳️

Comments

Chosen Answer:
AwesomeGCP
Highly Voted 2 years ago
Selected Answer: A
A. 1. Use StackDriver Logging and filter on BigQuery Insert Jobs. 2. Click on the email address in line with the App Engine Default Service Account in the authentication field. 3. Click Hide Matching Entries. 4. Make sure the resulting list is empty.
upvoted 13 times
Appsec977
1 year, 4 months ago
Stackdriver is now Cloud Operations.
upvoted 2 times
...
...
blacortik
Highly Voted 1 year, 1 month ago
Selected Answer: B
A: This option seems to be about using Cloud Logging and hiding matching entries. However, hiding matching entries wouldn't help in verifying the specific service account used for BigQuery Insert Jobs. C: While restricting permissions in BigQuery is important for security, it doesn't directly help you validate the specific service account that wrote the data. D: While IAM roles and permissions are important to manage access, it doesn't provide a clear process for verifying the service account used for a specific action. In summary, option B provides the appropriate steps to validate that data written to BigQuery was done using the App Engine Default Service Account by examining the Cloud Logging entries.
upvoted 5 times
anciaosinclinado
4 weeks, 1 day ago
Yes, but *hiding* the log entries associated with the App Engine Default Service Account will help *validate* that all data written to BigQuery was written by that service account. If we showed only the entries associated with this service account, we wouldn't achieve the question's objective. So A is correct.
upvoted 1 times
...
...
dija123
Most Recent 6 months, 3 weeks ago
Selected Answer: B
Agree with B
upvoted 1 times
dija123
6 months, 1 week ago
I think "Make sure the resulting list is empty" makes answer A correct, not B.
upvoted 4 times
...
...
PST21
1 year, 9 months ago
A is correct, as the last two are means of doing it rather than validating it.
upvoted 2 times
...
shayke
1 year, 12 months ago
Selected Answer: C
validate - C
upvoted 1 times
...
tangac
2 years, 1 month ago
Selected Answer: A
https://www.examtopics.com/discussions/google/view/32259-exam-professional-cloud-security-engineer-topic-1-question/
upvoted 4 times
...
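For anyone who wants to try option A, the Logs Explorer filter it describes can be sketched as below. Python is used only to assemble the filter string; the project ID is a placeholder, and `jobservice.jobinsert` is the legacy audit-log method name this thread discusses (newer BigQuery audit logs use different method names).

```python
# Build the Cloud Logging filter behind option A: BigQuery insert jobs
# whose caller is NOT the App Engine default service account. If the
# resulting list is empty, only that account wrote to BigQuery.
project_id = "my-project"  # placeholder
app_engine_default_sa = f"{project_id}@appspot.gserviceaccount.com"

log_filter = "\n".join([
    'resource.type="bigquery_resource"',
    'protoPayload.methodName="jobservice.jobinsert"',
    f'NOT protoPayload.authenticationInfo.principalEmail="{app_engine_default_sa}"',
])
print(log_filter)
```

Pasting this filter into Logs Explorer and confirming an empty result is the "Hide Matching Entries" workflow from option A expressed as a query.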

Question 78


Exam Professional Cloud Security Engineer topic 1 question 78 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 78
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your team wants to limit users with administrative privileges at the organization level.
Which two roles should your team restrict? (Choose two.)

  • A. Organization Administrator
  • B. Super Admin
  • C. GKE Cluster Admin
  • D. Compute Admin
  • E. Organization Role Viewer
Suggested Answer: AB 🗳️

Comments

Chosen Answer:
HateMicrosoft
Highly Voted 3 years, 7 months ago
The correct answer is A & B: resourcemanager.organizationAdmin and the Cloud Identity Super Admin (from the old G Suite, now Google Workspace).
upvoted 14 times
...
[Removed]
Most Recent 9 months, 4 weeks ago
Selected Answer: AD
For me the correct answer is A & D. In the context of GCP there is no Super Admin role; Super Admin is only used in Google Workspace (G Suite).
upvoted 2 times
...
AwesomeGCP
2 years ago
Selected Answer: AB
A. Organization Administrator B. Super Admin
upvoted 4 times
AzureDP900
1 year, 11 months ago
AB is correct
upvoted 1 times
...
...
Bingo21
3 years, 7 months ago
It says "limit users with administrative privileges". D doesn't give you admin privileges. AB is the closest to what the question is looking for.
upvoted 3 times
...
[Removed]
3 years, 11 months ago
Ans - AB
upvoted 3 times
...
MohitA
4 years, 1 month ago
AB are the one
upvoted 4 times
singhjoga
3 years, 9 months ago
There is no such role as "Super Admin". There is a Super Admin user, which has the "Owner" role for the whole organisation. The answer is probably A and D.
upvoted 8 times
...
...
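As a practical follow-up to restricting these roles: Organization Administrator bindings can be audited from the org-level IAM policy, while Super Admin lives in Cloud Identity/Google Workspace and will not appear there. A small illustrative sketch (the sample policy below is made up; a real one would come from `gcloud organizations get-iam-policy ORG_ID --format=json`):

```python
# List members holding org-level admin roles in an IAM policy document.
# The policy dict is a fabricated example for illustration.
ADMIN_ROLES = {"roles/resourcemanager.organizationAdmin"}

policy = {
    "bindings": [
        {"role": "roles/resourcemanager.organizationAdmin",
         "members": ["user:alice@example.com"]},
        {"role": "roles/viewer",
         "members": ["user:bob@example.com"]},
    ]
}

admins = sorted(
    member
    for binding in policy["bindings"]
    if binding["role"] in ADMIN_ROLES
    for member in binding["members"]
)
print(admins)  # → ['user:alice@example.com']
```

Super Admins, by contrast, have to be reviewed in the Google Workspace / Cloud Identity Admin Console, which is part of why the two roles in A and B are audited so differently.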

Question 79


Exam Professional Cloud Security Engineer topic 1 question 79 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 79
Topic #: 1
[All Professional Cloud Security Engineer Questions]

An organization's security and risk management teams are concerned about where their responsibility lies for certain production workloads they are running in
Google Cloud and where Google's responsibility lies. They are mostly running workloads using Google Cloud's platform-as-a-Service (PaaS) offerings, including
App Engine primarily.
Which area in the technology stack should they focus on as their primary responsibility when using App Engine?

  • A. Configuring and monitoring VPC Flow Logs
  • B. Defending against XSS and SQLi attacks
  • C. Managing the latest updates and security patches for the Guest OS
  • D. Encrypting all stored data
Suggested Answer: B 🗳️

Comments

Chosen Answer:
Random_Mane
Highly Voted 2 years, 6 months ago
Selected Answer: B
B. in PaaS the customer is responsible for web app security, deployment, usage, access policy, and content. https://cloud.google.com/architecture/framework/security/shared-responsibility-shared-fate
upvoted 7 times
...
BPzen
Most Recent 4 months, 1 week ago
Selected Answer: B
Why B. Defending against XSS and SQLi attacks is Correct: Application-Layer Security: When using PaaS offerings, developers are responsible for writing secure application code. This includes preventing application vulnerabilities like XSS, SQL injection, and insecure input validation.
upvoted 1 times
...
madcloud32
1 year, 1 month ago
Selected Answer: B
B is correct. Defense of App Engine and Application Security.
upvoted 1 times
...
gcpengineer
1 year, 11 months ago
Selected Answer: B
B is the ans.
upvoted 2 times
...
AzureDP900
2 years, 5 months ago
B is correct
upvoted 2 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: B
B. Defending against XSS and SQLi attacks Data at rest is encrypted by default by Google. So D is wrong. Should be B.
upvoted 4 times
...
koko2314
2 years, 6 months ago
Answer should be D. For SaaS solutions, web-based attacks are managed by Google; we just need to take care of the data, as per the link below.
upvoted 1 times
desertlotus1211
1 year, 7 months ago
read the question again... it's not D
upvoted 1 times
...
...
GHOST1985
2 years, 6 months ago
Selected Answer: D
Answer is D
upvoted 1 times
GHOST1985
2 years, 6 months ago
In PaaS, we're responsible for more controls than in IaaS, including network controls. You share responsibility with us for application-level controls and IAM management. You remain responsible for your data security and client protection. https://cloud.google.com/architecture/framework/security/shared-responsibility-shared-fate#defined_by_workloads
upvoted 2 times
gcpengineer
1 year, 11 months ago
IaaS need more controls thn PaaS
upvoted 1 times
...
tifo16
2 years, 3 months ago
Data at rest is encrypted by default by Google. So D is wrong. As mentioned by your link it Should be B.
upvoted 1 times
...
...
...
[Removed]
2 years, 7 months ago
Selected Answer: B
B it is.
upvoted 3 times
...

Question 80


Exam Professional Cloud Security Engineer topic 1 question 80 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 80
Topic #: 1
[All Professional Cloud Security Engineer Questions]

An engineering team is launching a web application that will be public on the internet. The web application is hosted in multiple GCP regions and will be directed to the respective backend based on the URL request.
Your team wants to avoid exposing the application directly on the internet and wants to deny traffic from a specific list of malicious IP addresses.
Which solution should your team implement to meet these requirements?

  • A. Cloud Armor
  • B. Network Load Balancing
  • C. SSL Proxy Load Balancing
  • D. NAT Gateway
Suggested Answer: A 🗳️

Comments

Chosen Answer:
DebasishLowes
Highly Voted 2 years, 6 months ago
Ans : A
upvoted 8 times
BillBaits
1 year, 11 months ago
Think so
upvoted 1 times
...
...
Appsec977
Most Recent 4 months, 3 weeks ago
Selected Answer: A
We can block the specific IPs in Cloud Armor using simple rules, or use advanced rules written in Common Expression Language (CEL).
upvoted 4 times
...
shayke
9 months, 3 weeks ago
Selected Answer: A
A is the only answer, because you are asked to limit access by IP and Cloud Armor is the only option that can do that.
upvoted 2 times
...
AzureDP900
11 months, 1 week ago
This is a straightforward question; A is right.
upvoted 1 times
...
AwesomeGCP
1 year ago
Selected Answer: A
A. Cloud Armor
upvoted 2 times
...
cloudprincipal
1 year, 4 months ago
Selected Answer: A
https://cloud.google.com/armor/docs/security-policy-overview#edge-security
upvoted 2 times
...
[Removed]
2 years, 11 months ago
Ans - A
upvoted 4 times
...
mlyu
3 years, 1 month ago
Definitely B
upvoted 2 times
ownez
3 years ago
Should be A? Cloud Armor can deny traffic by defining an IP address list rule and avoids exposing the application directly on the internet, while Network LB uses Google Cloud firewalls to control or filter access to the backend VMs. Answer is A.
upvoted 5 times
mlyu
2 years, 12 months ago
You are correct, the answer is A. With Cloud Armor, user traffic directed to an external HTTP(S) load balancer enters the PoP closest to the user in Premium Tier. https://cloud.google.com/armor/docs/security-policy-overview#edge-security
upvoted 5 times
...
...
...
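For reference, a Cloud Armor rule that denies a list of source IPs looks roughly like this in the Compute API's `securityPolicies` rule shape (the addresses, priority, and description are placeholders for illustration):

```python
# Sketch of a Cloud Armor security-policy rule body that denies traffic
# from a list of malicious IP ranges (all values are placeholders).
malicious_ips = ["198.51.100.0/24", "203.0.113.7/32"]

deny_rule = {
    "priority": 1000,
    "action": "deny(403)",
    "description": "Block known-malicious sources",
    "match": {
        "versionedExpr": "SRC_IPS_V1",
        "config": {"srcIpRanges": malicious_ips},
    },
}
print(deny_rule["action"])  # → deny(403)
```

The same rule attached to the external HTTP(S) load balancer's backend service is what gives the URL-based multi-region routing plus IP denial the question asks for.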

Question 81


Exam Professional Cloud Security Engineer topic 1 question 81 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 81
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A customer is running an analytics workload on Google Cloud Platform (GCP) where Compute Engine instances are accessing data stored on Cloud Storage.
Your team wants to make sure that this workload will not be able to access, or be accessed from, the internet.
Which two strategies should your team use to meet these requirements? (Choose two.)

  • A. Configure Private Google Access on the Compute Engine subnet
  • B. Avoid assigning public IP addresses to the Compute Engine cluster.
  • C. Make sure that the Compute Engine cluster is running on a separate subnet.
  • D. Turn off IP forwarding on the Compute Engine instances in the cluster.
  • E. Configure a Cloud NAT gateway.
Suggested Answer: AB 🗳️

Comments

Chosen Answer:
MohitA
Highly Voted 3 years, 7 months ago
AB suits well
upvoted 20 times
...
DebasishLowes
Highly Voted 3 years ago
Ans : AB
upvoted 7 times
...
Mauratay
Most Recent 2 months ago
Selected Answer: AE
AE A. Configuring Private Google Access on the Compute Engine subnet: This feature enables instances without public IP addresses to connect to Google APIs and services over internal IP addresses, ensuring that the instances cannot be accessed from the internet. E. Configuring a Cloud NAT gateway: This ensures that instances within the VPC can connect to the internet, but only to specific IP ranges and ports and it also ensures that the instances cannot initiate connection to the internet. By configuring both options, you are providing your Compute Engine instances with a way to access Google services while also being isolated from the internet and that is the best way to ensure that this workload will not be able to access, or be accessed from, the internet.
upvoted 1 times
...
[Removed]
8 months, 2 weeks ago
Selected Answer: AB
A,B Has to be A and B together. A (Private Google Access) has minimal effect on instances with public IP so we also need to avoid assigning public IP to get the desired (internal only) effect. https://cloud.google.com/vpc/docs/private-google-access
upvoted 2 times
...
gcpengineer
10 months, 3 weeks ago
Selected Answer: AB
AB, A to access the cloud storage privately
upvoted 2 times
...
gcpengineer
11 months ago
Selected Answer: BE
BE. no public ip in vm and nat to access the cloud storage
upvoted 1 times
gcpengineer
10 months, 3 weeks ago
AB, A to access the cloud storage privately
upvoted 1 times
...
...
therealsohail
1 year, 2 months ago
AE A. Configuring Private Google Access on the Compute Engine subnet: This feature enables instances without public IP addresses to connect to Google APIs and services over internal IP addresses, ensuring that the instances cannot be accessed from the internet. E. Configuring a Cloud NAT gateway: This ensures that instances within the VPC can connect to the internet, but only to specific IP ranges and ports and it also ensures that the instances cannot initiate connection to the internet. By configuring both options, you are providing your Compute Engine instances with a way to access Google services while also being isolated from the internet and that is the best way to ensure that this workload will not be able to access, or be accessed from, the internet.
upvoted 2 times
diasporabro
1 year, 2 months ago
NAT Gateway allows an instance to access the public internet (while not being accessible from the public internet), so it is incorrect
upvoted 3 times
...
...
AzureDP900
1 year, 5 months ago
AB is correct A. Configure Private Google Access on the Compute Engine subnet B. Avoid assigning public IP addresses to the Compute Engine cluster.
upvoted 1 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: AB
A. Configure Private Google Access on the Compute Engine subnet B. Avoid assigning public IP addresses to the Compute Engine cluster.
upvoted 2 times
...
cloudprincipal
1 year, 10 months ago
Selected Answer: AB
agree with all the others
upvoted 2 times
...
pfilourenco
2 years, 11 months ago
B and E: "make sure that this workload will not be able to access, or be accessed from, the internet." If we have cloud NAT we are able to access the internet! Also with public IP.
upvoted 2 times
Rupo7
2 years, 1 month ago
The question says " not be able to access, or be accessed from, the internet." A NAT gateway enables access to the internet, just behind a static IP. A. Private access for the subnet is required to enable access to GCS. B is a good measure, as then the instance cannot access the internet at all (without a NAT Gateway that is).
upvoted 1 times
gcpengineer
11 months ago
Private access is required for reaching storage, not for the VMs themselves.
upvoted 1 times
...
...
...
[Removed]
2 years, 12 months ago
Not A https://cloud.google.com/vpc/docs/private-google-access
upvoted 1 times
tanfromvn
2 years, 9 months ago
A & B. Why not A? Private Google Access only allows traffic within GCP and to Google APIs.
upvoted 2 times
...
[Removed]
2 years, 12 months ago
Not D, because by default IP forwarding is disabled; you do not need to turn it off.
upvoted 1 times
[Removed]
2 years, 12 months ago
So B and E is the right answer.
upvoted 3 times
...
...
...
ffdd1234
3 years, 2 months ago
If you avoid assigning public IP addresses to the Compute Engine cluster, the instances could still access the internet if they have a NAT gateway; maybe the answer is A and D.
upvoted 1 times
ffdd1234
2 years, 5 months ago
+1 A-D
upvoted 1 times
ffdd1234
2 years, 5 months ago
But not sure: "Ensure that the IP Forwarding feature is not enabled at the Google Compute Engine instance level for security and compliance reasons, as instances with IP Forwarding enabled act as routers/packet forwarders." IP forwarding is for routing packets, so it could not be D.
upvoted 1 times
...
...
...
Topsy
3 years, 3 months ago
A and B is correct
upvoted 4 times
...
[Removed]
3 years, 5 months ago
Ans - AB
upvoted 2 times
...
genesis3k
3 years, 5 months ago
AB is the correct answer.
upvoted 1 times
...
Wooky
3 years, 6 months ago
B, D, not A. Private Google Access provides access to public Google APIs without a public IP.
upvoted 1 times
Wooky
3 years, 6 months ago
My mistake, ans is AB.
upvoted 2 times
...
...
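The A + B combination discussed above maps to two concrete settings, sketched here as the Compute API request bodies they correspond to (subnet name, region, and CIDR are placeholders):

```python
# A: enable Private Google Access on the subnet, so instances with only
# internal IPs can still reach Cloud Storage and other Google APIs.
subnet = {
    "name": "analytics-subnet",          # placeholder
    "ipCidrRange": "10.0.0.0/24",        # placeholder
    "privateIpGoogleAccess": True,
}

# B: omit the `accessConfigs` entry on the instance NIC, so the instance
# is created without an external IP and cannot reach the internet.
instance_nic = {
    "network": "global/networks/default",
    "subnetwork": "regions/us-central1/subnetworks/analytics-subnet",
    # no "accessConfigs" key -> no external IP
}

print(subnet["privateIpGoogleAccess"], "accessConfigs" in instance_nic)
```

Together these satisfy both halves of the requirement: the workload reaches Cloud Storage over Google's internal network (A) while having no route to or from the public internet (B).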

Question 82


Exam Professional Cloud Security Engineer topic 1 question 82 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 82
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A customer wants to run a batch processing system on VMs and store the output files in a Cloud Storage bucket. The networking and security teams have decided that no VMs may reach the public internet.
How should this be accomplished?

  • A. Create a firewall rule to block internet traffic from the VM.
  • B. Provision a NAT Gateway to access the Cloud Storage API endpoint.
  • C. Enable Private Google Access.
  • D. Mount a Cloud Storage bucket as a local filesystem on every VM.
Suggested Answer: C 🗳️

Comments

Chosen Answer:
tanfromvn
Highly Voted 3 years, 3 months ago
C: there is no traffic to the outside internet.
upvoted 15 times
mynk29
2 years, 7 months ago
Private Google Access is enabled at the subnet level, not at the VPC level.
upvoted 1 times
...
...
nilopo
Highly Voted 2 years ago
Selected Answer: C
The ask is to store the output files in a Cloud storage bucket. "The networking and security teams have decided that no VMs may reach the public internet" - No VMs MAY reach public internet but not 'MUST'. Hence 'C' is the answer
upvoted 7 times
...
desertlotus1211
Most Recent 8 months ago
What if the VM is on-premises? The question never said it was in GCP. Would the answer not be B?
upvoted 1 times
...
Portugapt
8 months, 3 weeks ago
Selected Answer: C
What should be accomplished is the access to GCS, knowing VMs cannot access the public network. So, Private Google Access accomplishes it.
upvoted 1 times
...
desertlotus1211
9 months, 1 week ago
The answer is A.... With PGA enabled, VMs can still reach the internet. Accessing the backend storage is there to throw you off of what is being asked, and that's that NO VMs may reach the internet... Answer is A
upvoted 1 times
...
[Removed]
9 months, 4 weeks ago
Selected Answer: C
C private google access allows access to google services without internet connection
upvoted 2 times
...
Xoxoo
1 year ago
Selected Answer: C
To ensure that VMs can access Cloud Storage without reaching the public internet, you should: C. Enable Private Google Access. Enabling Private Google Access allows VMs with only internal IP addresses in a VPC network to access Google Cloud services like Cloud Storage without needing external IP addresses or going through the public internet.
upvoted 2 times
Xoxoo
1 year ago
Option B, provisioning a NAT Gateway, would enable VMs to access the public internet, which is not in line with the requirement of not allowing VMs to reach the public internet. Options A and D are not suitable for the specific requirement of accessing Cloud Storage while preventing VMs from reaching the public internet.
upvoted 1 times
...
...
blacortik
1 year, 1 month ago
Selected Answer: B
B. Provision a NAT Gateway to access the Cloud Storage API endpoint. Explanation: To ensure that VMs can't reach the public internet but can still access Google Cloud services like Cloud Storage, you can use a Network Address Translation (NAT) Gateway. NAT Gateway allows instances in a private subnet to initiate outbound connections to the internet while masking their actual internal IP addresses. This way, the VMs can access the Cloud Storage API endpoint without directly connecting to the public internet.
upvoted 2 times
...
[Removed]
1 year, 2 months ago
Selected Answer: C
"C" The question is not worded well. If you replace "..has decided.." with "..has enforced.." then the meat of the question becomes how to achieve the first part of the requirement which is reaching cloud storage without public access, which is through private google access. Reference: https://cloud.google.com/vpc/docs/private-google-access
upvoted 3 times
desertlotus1211
1 year, 1 month ago
This has no effect and is meaningless if the VM has an external IP... You need to read the document: 'Private Google Access has no effect on instances that have external IP addresses. Instances with external IP addresses can access the internet, according to the internet access requirements'... Nowhere in the question does it say whether the VMs have an external IP. Correct answer is A
upvoted 1 times
...
...
gcpengineer
1 year, 4 months ago
Selected Answer: A
I think A is correct
upvoted 1 times
...
gcpengineer
1 year, 4 months ago
Selected Answer: B
B is the ans, as nat is needed to reach the cloud storage
upvoted 1 times
gcpengineer
1 year, 4 months ago
I think A is correct
upvoted 1 times
...
...
Lyfedge
1 year, 6 months ago
The question says "The networking and security teams have decided that no VMs may reach the public internet", so A.
upvoted 1 times
gcpengineer
1 year, 4 months ago
How are u suppose to access cloud storage?
upvoted 1 times
desertlotus1211
9 months, 1 week ago
That's not what they asked... they asked about 'The networking and security teams have decided that no VMs may reach the public internet'.... so what do you do?
upvoted 1 times
...
...
...
Meyucho
1 year, 9 months ago
C!!!! This example is exactly the one and only use case PGA exists for!!!
upvoted 1 times
...
TonytheTiger
1 year, 10 months ago
Answer C. Here is why: the VMs need to access a Google service, i.e. a Cloud Storage bucket. The Google docs state: "Private Google Access permits access to Google APIs and services in Google's production infrastructure." https://cloud.google.com/vpc/docs/private-google-access Everyone is reading the question as limiting access to the public internet but is missing the second part, which is accessing a Google service. ONLY enabling Private Google Access will fulfil the requirement.
upvoted 2 times
...
Littleivy
1 year, 11 months ago
Selected Answer: C
C is the answer
upvoted 1 times
...
rotorclear
1 year, 11 months ago
Selected Answer: C
The ask is to access Cloud Storage while doing the batch processing, not how to block the internet. Overall it's a poor choice of words in the question, attempting to confuse rather than check knowledge.
upvoted 1 times
AzureDP900
1 year, 11 months ago
C is right
upvoted 1 times
...
...
AwesomeGCP
2 years ago
C. Enable Private Google Access on the VPC.
upvoted 1 times
...

Question 83


Exam Professional Cloud Security Engineer topic 1 question 83 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 83
Topic #: 1
[All Professional Cloud Security Engineer Questions]

As adoption of the Cloud Data Loss Prevention (Cloud DLP) API grows within your company, you need to optimize usage to reduce cost. Cloud DLP target data is stored in Cloud Storage and BigQuery. The location and region are identified as a suffix in the resource name.
Which cost reduction options should you recommend?

  • A. Set appropriate rowsLimit value on BigQuery data hosted outside the US and set appropriate bytesLimitPerFile value on multiregional Cloud Storage buckets.
  • B. Set appropriate rowsLimit value on BigQuery data hosted outside the US, and minimize transformation units on multiregional Cloud Storage buckets.
  • C. Use rowsLimit and bytesLimitPerFile to sample data and use CloudStorageRegexFileSet to limit scans.
  • D. Use FindingLimits and TimespanContfig to sample data and minimize transformation units.
Suggested Answer: C 🗳️

Comments

Chosen Answer:
[Removed]
Highly Voted 3 years, 5 months ago
Ans - C https://cloud.google.com/dlp/docs/inspecting-storage#sampling https://cloud.google.com/dlp/docs/best-practices-costs#limit_scans_of_files_in_to_only_relevant_files
upvoted 14 times
[Removed]
3 years, 5 months ago
https://cloud.google.com/dlp/docs/inspecting-storage#limiting-gcs
upvoted 1 times
...
...
passtest100
Highly Voted 3 years, 6 months ago
C is the right one.
upvoted 5 times
...
Xoxoo
Most Recent 6 months, 3 weeks ago
Selected Answer: C
To optimize usage of the Cloud Data Loss Prevention (Cloud DLP) API and reduce cost, you should consider using sampling and CloudStorageRegexFileSet to limit scans. By sampling data, you limit the amount of data that the DLP API scans, thereby reducing costs. You can use the rowsLimit and bytesLimitPerFile options to sample data, and CloudStorageRegexFileSet to limit scans to only specific files in Cloud Storage. In addition, you can set an appropriate rowsLimit value on BigQuery data hosted outside the US to further optimize usage and reduce costs.
upvoted 2 times
...
AzureDP900
1 year, 5 months ago
C is right
upvoted 4 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: C
C . Use rowsLimit and bytesLimitPerFile to sample data and use CloudStorageRegexFileSet to limit scans.
upvoted 4 times
...
cloudprincipal
1 year, 10 months ago
Selected Answer: C
https://cloud.google.com/dlp/docs/inspecting-storage#sampling
upvoted 3 times
...
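The fields named in option C come straight from the DLP v2 API's inspect-job `storageConfig`. A sketch with placeholder project, dataset, bucket, regex, and limit values:

```python
# Sampling rows when inspecting a BigQuery table (rowsLimit caps the
# number of rows scanned; all identifiers are placeholders).
bigquery_options = {
    "tableReference": {"projectId": "my-project",
                       "datasetId": "logs", "tableId": "events"},
    "rowsLimit": 1000,
}

# Sampling bytes per file and restricting which Cloud Storage objects
# are scanned at all, via a CloudStorageRegexFileSet.
cloud_storage_options = {
    "fileSet": {
        "regexFileSet": {
            "bucketName": "my-bucket",
            "includeRegex": ["reports/.*\\.csv"],
        }
    },
    "bytesLimitPerFile": "1048576",  # 1 MiB, sent as a string in the API
}
print(cloud_storage_options["bytesLimitPerFile"])
```

Both mechanisms reduce billable bytes inspected: the limits sample within each resource, and the regex file set keeps irrelevant objects out of the scan entirely.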

Question 84


Exam Professional Cloud Security Engineer topic 1 question 84 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 84
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your team uses a service account to authenticate data transfers from a given Compute Engine virtual machine instance to a specified Cloud Storage bucket. An engineer accidentally deletes the service account, which breaks application functionality. You want to recover the application as quickly as possible without compromising security.
What should you do?

  • A. Temporarily disable authentication on the Cloud Storage bucket.
  • B. Use the undelete command to recover the deleted service account.
  • C. Create a new service account with the same name as the deleted service account.
  • D. Update the permissions of another existing service account and supply those credentials to the applications.
Suggested Answer: B 🗳️

Comments

Chosen Answer:
DebasishLowes
Highly Voted 4 years, 1 month ago
Ans : B
upvoted 9 times
...
saurabh1805
Highly Voted 4 years, 5 months ago
B is correct answer here. https://cloud.google.com/iam/docs/reference/rest/v1/projects.serviceAccounts/undelete
upvoted 7 times
AzureDP900
2 years, 5 months ago
Thank you for sharing link, I agree B is right
upvoted 1 times
...
...
Zek
Most Recent 4 months, 1 week ago
Selected Answer: B
Answer is B. After you delete a service account, IAM permanently removes the service account 30 days later. You can usually undelete a deleted service account if it meets these criteria: The service account was deleted less than 30 days ago. https://cloud.google.com/iam/docs/service-accounts-delete-undelete#undeleting Not C because The new service account does not inherit the permissions of the deleted service account. In effect, it is completely separate from the deleted service account
upvoted 1 times
...
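The recovery path the commenters describe can be sketched as a command. Note this is a minimal sketch, assuming hypothetical project and account IDs: `undelete` takes the deleted account's numeric `unique_id` (recoverable from the Admin Activity log entry for the delete call), not its email address, and only works within 30 days of deletion. The script below only builds and prints the command rather than executing it.

```shell
# Sketch only: the project ID and unique_id below are hypothetical placeholders.
# The unique_id of a deleted service account can be found in the Admin Activity
# audit log entry for the DeleteServiceAccount call.
SA_UNIQUE_ID="123456789012345678901"
PROJECT="my-project"

# Build the undelete invocation (run it in Cloud Shell within 30 days of deletion).
UNDELETE_CMD="gcloud beta iam service-accounts undelete ${SA_UNIQUE_ID} --project=${PROJECT}"
echo "${UNDELETE_CMD}"
```

This is also why B beats C in the question: a newly created account with the same name gets a new `unique_id` and inherits none of the old bindings, whereas undelete restores the original account and its role grants.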
pradoUA
1 year, 6 months ago
Selected Answer: B
B is correct
upvoted 2 times
...
ArizonaClassics
1 year, 6 months ago
B. Use the undelete command to recover the deleted service account. Google Cloud Platform provides an undelete command that can be used to recover a recently deleted service account. This would be the fastest and most direct way to restore functionality without compromising security or introducing changes to the application configuration.
upvoted 3 times
...
[Removed]
1 year, 8 months ago
Selected Answer: B
"B" Answer is B however the documentation has been updated. Not all links in other comments are valid still. Here's the latest link around this topic. https://cloud.google.com/iam/docs/service-accounts-delete-undelete#undeleting
upvoted 3 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: B
B. Use the undelete command to recover the deleted service account.
upvoted 3 times
...
[Removed]
4 years, 5 months ago
Ans - B
upvoted 3 times
...
MohitA
4 years, 7 months ago
B is the Answer
upvoted 4 times
...

Question 85

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 85 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 85
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are the Security Admin in your company. You want to synchronize all security groups that have an email address from your LDAP directory in Cloud IAM.
What should you do?

  • A. Configure Google Cloud Directory Sync to sync security groups using LDAP search rules that have "user email address" as the attribute to facilitate one-way sync.
  • B. Configure Google Cloud Directory Sync to sync security groups using LDAP search rules that have "user email address" as the attribute to facilitate bidirectional sync.
  • C. Use a management tool to sync the subset based on the email address attribute. Create a group in the Google domain. A group created in a Google domain will automatically have an explicit Google Cloud Identity and Access Management (IAM) role.
  • D. Use a management tool to sync the subset based on group object class attribute. Create a group in the Google domain. A group created in a Google domain will automatically have an explicit Google Cloud Identity and Access Management (IAM) role.
Suggested Answer: A 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
sudarchary
Highly Voted 2 years, 2 months ago
Selected Answer: A
search rules that have "user email address" as the attribute to facilitate one-way sync. Reference Links: https://support.google.com/a/answer/6126589?hl=en
upvoted 11 times
...
JoseMaria111
Highly Voted 1 year, 6 months ago
GCDS allow sync ldap users in one way. A is correct
upvoted 5 times
...
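As a rough illustration of the one-way sync the answer relies on: GCDS search rules are LDAP filters, so you can preview which security groups a rule would match before syncing. This is a sketch only; the base DN, object class, and attribute names are hypothetical and depend on your directory schema. The script builds and prints an `ldapsearch` preview command rather than running it.

```shell
# Sketch: preview which security groups a GCDS group search rule would match.
# BASE_DN, objectClass, and the mail attribute are hypothetical placeholders;
# adjust them to your LDAP directory's schema.
BASE_DN="ou=groups,dc=example,dc=com"
# Match group objects that carry an email address (the attribute GCDS maps to
# the Google group address). The sync itself is one-way: LDAP -> Cloud Identity.
LDAP_FILTER='(&(objectClass=group)(mail=*))'

PREVIEW_CMD="ldapsearch -LLL -b ${BASE_DN} ${LDAP_FILTER} mail"
echo "${PREVIEW_CMD}"
```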
GCBC
Most Recent 7 months, 2 weeks ago
A is correct
upvoted 2 times
...
PST21
1 year, 3 months ago
A is correct as it should be a one-way sync - LDAP -> Cloud Identity via GCDS
upvoted 2 times
...
AzureDP900
1 year, 5 months ago
A is correct
upvoted 3 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: A
A. Configure Google Cloud Directory Sync to sync security groups using LDAP search rules that have “user email address” as the attribute to facilitate one-way sync.
upvoted 2 times
...
[Removed]
2 years, 12 months ago
Why is A not correct? GCP provides this sync tool.
upvoted 3 times
mistryminded
2 years, 4 months ago
Incorrect. GCDS is a Google Workspace Admin tool. The correct answer is A. GCDS only syncs one way - https://support.google.com/a/answer/106368?hl=en
upvoted 4 times
...
...
DebasishLowes
3 years, 1 month ago
Ans : A
upvoted 2 times
...
[Removed]
3 years, 5 months ago
Ans - A
upvoted 2 times
...
saurabh1805
3 years, 5 months ago
A is correct answer here.
upvoted 2 times
...
passtest100
3 years, 6 months ago
Answer - A
upvoted 2 times
...
skshak
3 years, 6 months ago
Answer - A
upvoted 2 times
...

Question 86

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 86 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 86
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are part of a security team investigating a compromised service account key. You need to audit which new resources were created by the service account.
What should you do?

  • A. Query Data Access logs.
  • B. Query Admin Activity logs.
  • C. Query Access Transparency logs.
  • D. Query Stackdriver Monitoring Workspace.
Suggested Answer: B 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
MohitA
Highly Voted 4 years, 1 month ago
B is the Ans
upvoted 14 times
Fellipo
3 years, 11 months ago
B it's OK
upvoted 4 times
...
ownez
4 years ago
Shouldn't it be A? The question is about which resources were created by the SA. B (Admin Activity logs) cannot view this. It only covers user activity, such as creating, modifying, or deleting a particular SA.
upvoted 1 times
FatCharlie
3 years, 10 months ago
"Admin Activity audit logs contain log entries for API calls or other administrative actions that modify the configuration or metadata of resources. For example, these logs record when users create VM instances or change Identity and Access Management permissions". This is exactly what you want to see. What resources were created by the SA? https://cloud.google.com/logging/docs/audit#admin-activity
upvoted 10 times
AzureDP900
1 year, 11 months ago
B is right . Agree with your explanation
upvoted 2 times
...
...
...
...
VicF
Highly Voted 3 years, 5 months ago
Ans B "B" is for actions that modify the configuration or metadata of resources. For example, these logs record when users create VM instances or change Identity and Access Management permissions. "A" is only for "user-provided" resource data. Data Access audit logs-- except for BigQuery Data Access audit logs-- "are disabled by default"
upvoted 6 times
...
dija123
Most Recent 6 months, 1 week ago
Selected Answer: B
Agree with B
upvoted 1 times
...
Xoxoo
1 year ago
Selected Answer: B
To audit which new resources were created by a compromised service account key, you should query Admin Activity logs. Admin Activity logs provide a record of every administrative action taken in your Google Cloud Platform (GCP) project, including the creation of new resources. By querying Admin Activity logs, you can identify which new resources were created by the compromised service account key and take appropriate action to secure your environment. You can use the gcloud command-line tool or the Cloud Console to query Admin Activity logs. You can filter the logs based on specific criteria, such as time range, user, or resource type.
upvoted 2 times
...
Meyucho
1 year, 9 months ago
Selected Answer: B
B - Audit logs. They have all the API calls that creates, modify or destroy resources. https://cloud.google.com/logging/docs/audit#admin-activity
upvoted 2 times
...
AwesomeGCP
2 years ago
Selected Answer: B
B. Query Admin Activity logs.
upvoted 3 times
...
JoseMaria111
2 years ago
Admin activity log records resources changes. B is correct
upvoted 2 times
...
piyush_1982
2 years, 2 months ago
Selected Answer: B
Admin activity logs are always created to log entries for API calls or other actions that modify the configuration or metadata of resources. For example, these logs record when users create VM instances or change Identity and Access Management permissions.
upvoted 2 times
...
cloudprincipal
2 years, 4 months ago
Selected Answer: B
Admin activity logs contain all GCP API calls. So this is where the service account activity will show up
upvoted 2 times
...
[Removed]
3 years, 6 months ago
I support B, https://cloud.google.com/iam/docs/audit-logging says IAM logs write into admin log
upvoted 4 times
...
DebasishLowes
3 years, 6 months ago
Ans : B
upvoted 3 times
...
[Removed]
3 years, 11 months ago
Ans - B
upvoted 4 times
...
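The query the commenters describe can be sketched with `gcloud logging read` and a filter on the Admin Activity log name plus the service account's principal email. This is a minimal sketch, assuming a hypothetical project and service account; the script only builds and prints the filter so the actual read can be run in Cloud Shell.

```shell
# Sketch: build an Admin Activity audit-log filter for a (hypothetical)
# compromised service account. The project and SA email are placeholders.
SA_EMAIL="compromised-sa@my-project.iam.gserviceaccount.com"

# Admin Activity entries live under the .../activity log; filter them down
# to calls authenticated as the suspect service account.
FILTER='logName:"cloudaudit.googleapis.com%2Factivity"'
FILTER="${FILTER} AND protoPayload.authenticationInfo.principalEmail=\"${SA_EMAIL}\""
echo "${FILTER}"

# Then, in Cloud Shell:
#   gcloud logging read "${FILTER}" --project=my-project --freshness=30d
```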

Question 87

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 87 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 87
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You have an application where the frontend is deployed on a managed instance group in subnet A and the data layer is stored on a mysql Compute Engine virtual machine (VM) in subnet B on the same VPC. Subnet A and Subnet B hold several other Compute Engine VMs. You only want to allow the application frontend to access the data in the application's mysql instance on port 3306.
What should you do?

  • A. Configure an ingress firewall rule that allows communication from the src IP range of subnet A to the tag "data-tag" that is applied to the mysql Compute Engine VM on port 3306.
  • B. Configure an ingress firewall rule that allows communication from the frontend's unique service account to the unique service account of the mysql Compute Engine VM on port 3306.
  • C. Configure a network tag "fe-tag" to be applied to all instances in subnet A and a network tag "data-tag" to be applied to all instances in subnet B. Then configure an egress firewall rule that allows communication from Compute Engine VMs tagged with data-tag to destination Compute Engine VMs tagged fe-tag.
  • D. Configure a network tag "fe-tag" to be applied to all instances in subnet A and a network tag "data-tag" to be applied to all instances in subnet B. Then configure an ingress firewall rule that allows communication from Compute Engine VMs tagged with fe-tag to destination Compute Engine VMs tagged with data-tag.
Suggested Answer: B 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
Zuy01
Highly Voted 3 years, 1 month ago
B for sure, u can check this : https://cloud.google.com/sql/docs/mysql/sql-proxy#using-a-service-account
upvoted 11 times
...
dija123
Most Recent 6 months, 3 weeks ago
Selected Answer: B
Agree with B
upvoted 1 times
...
Xoxoo
1 year ago
Selected Answer: B
This approach ensures that only the application frontend can access the data in the MySQL instance, while all other Compute Engine VMs in subnet A and subnet B are restricted from accessing it. By configuring an ingress firewall rule that allows communication between the frontend's unique service account and the unique service account of the MySQL Compute Engine VM, you can ensure that only authorized users can access your MySQL instance.
upvoted 2 times
...
GCBC
1 year, 1 month ago
B. Firewall rules using service accounts are better than tags.
upvoted 2 times
...
[Removed]
1 year, 2 months ago
Selected Answer: B
"B" I believe the answer is between B and A since part of the requirement is specifying the port. B is more correct since it leverages service accounts, which is best practice for authentication/communication between application and database. Also, answer "A" allows ALL instances in the subnet to reach mysql, which is not desired. They only want the specific Frontend instances to reach it, excluding other instances in the subnet. https://cloud.google.com/firewall/docs/firewalls#best_practices_for_firewall_rules
upvoted 3 times
...
AwesomeGCP
2 years ago
Selected Answer: B
B. Configure an ingress firewall rule that allows communication from the frontend’s unique service account to the unique service account of the mysql ComputeEngine VM on port 3306.
upvoted 3 times
...
JoseMaria111
2 years ago
B is correct. Firewall rules using service accounts are better than tag-based rules. https://cloud.google.com/vpc/docs/firewalls#best_practices_for_firewall_rules
upvoted 2 times
...
mT3
2 years, 4 months ago
Selected Answer: B
Ans : B
upvoted 4 times
...
major_querty
2 years, 10 months ago
Why is it not A? A seems straightforward. The link which Zuy01 provided for answer B states: For this reason, using a service account is the recommended method for production instances NOT running on a Compute Engine instance.
upvoted 4 times
Loved
1 year, 11 months ago
But answer A says "communication from the src IP range of subnet A"... this rule includes all the instances on subnet A, while you have to consider only the frontend
upvoted 1 times
...
Arturo_Cloud
2 years, 1 month ago
I agree (A); the goal is to limit a MySQL server running on Compute Engine (IaaS), not Cloud SQL (PaaS), so Network Tags are the most common and recommended approach. Don't get confused with the services....
upvoted 2 times
...
...
DebasishLowes
3 years, 6 months ago
Ans : B
upvoted 2 times
...
dtmtor
3 years, 6 months ago
ans is B
upvoted 2 times
...
[Removed]
3 years, 11 months ago
Ans - B
upvoted 4 times
...
Rantu
4 years ago
B is correct
upvoted 4 times
...
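The service-account-based rule from answer B can be sketched as a single firewall rule. This is a minimal sketch, assuming hypothetical network, project, and service account names; the script only builds and prints the `gcloud compute firewall-rules create` invocation (whose `--source-service-accounts` and `--target-service-accounts` flags carry the source/target identities).

```shell
# Sketch: ingress rule letting only the frontend's SA reach the mysql VM's SA
# on tcp:3306. All names below are hypothetical placeholders.
FE_SA="frontend-sa@my-project.iam.gserviceaccount.com"
DB_SA="mysql-sa@my-project.iam.gserviceaccount.com"

RULE_CMD="gcloud compute firewall-rules create allow-fe-to-mysql \
  --network=my-vpc --direction=INGRESS --action=ALLOW --rules=tcp:3306 \
  --source-service-accounts=${FE_SA} \
  --target-service-accounts=${DB_SA}"
echo "${RULE_CMD}"
```

Because the rule keys on service accounts rather than a subnet range, the other VMs in subnet A never match it, which is exactly the gap in answer A that the comments point out.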

Question 88

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 88 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 88
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your company operates an application instance group that is currently deployed behind a Google Cloud load balancer in us-central-1 and is configured to use the
Standard Tier network. The infrastructure team wants to expand to a second Google Cloud region, us-east-2. You need to set up a single external IP address to distribute new requests to the instance groups in both regions.
What should you do?

  • A. Change the load balancer backend configuration to use network endpoint groups instead of instance groups.
  • B. Change the load balancer frontend configuration to use the Premium Tier network, and add the new instance group.
  • C. Create a new load balancer in us-east-2 using the Standard Tier network, and assign a static external IP address.
  • D. Create a Cloud VPN connection between the two regions, and enable Google Private Access.
Suggested Answer: B 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
Fellipo
Highly Voted 3 years, 5 months ago
In Premium Tier: Backends can be in any region and any VPC network. In Standard Tier: Backends must be in the same region as the forwarding rule, but can be in any VPC network.
upvoted 14 times
AzureDP900
1 year, 5 months ago
B is right
upvoted 2 times
...
...
mlyu
Highly Voted 3 years, 7 months ago
Should be B In Standard Tier LB, Backends must be in the same region https://cloud.google.com/load-balancing/docs/load-balancing-overview#backend_region_and_network
upvoted 8 times
...
hakunamatataa
Most Recent 6 months, 3 weeks ago
Selected Answer: B
B is the correct answer.
upvoted 2 times
...
Xoxoo
6 months, 3 weeks ago
Selected Answer: B
To set up a single external IP address to distribute new requests to the instance groups in both regions, you should change the load balancer frontend configuration to use the Premium Tier network, and add the new instance group. By changing the load balancer frontend configuration to use the Premium Tier network, you can create a global load balancer that can distribute traffic across multiple regions using a single IP address. You can then add the new instance group to the existing load balancer to ensure that new requests are distributed to both regions. This approach provides a scalable and cost-effective solution for distributing traffic across multiple regions while ensuring high availability and low latency.
upvoted 3 times
...
[Removed]
8 months, 3 weeks ago
Selected Answer: B
"B" Answer is "B". Premium Network Tier allows you to span multiple regions. https://cloud.google.com/network-tiers
upvoted 4 times
...
spoxman
1 year ago
Selected Answer: B
only Premium allows LB between regions
upvoted 1 times
...
Meyucho
1 year, 3 months ago
Selected Answer: B
Global load balancers require Premium Tier!
upvoted 1 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: B
B. Change the load balancer frontend configuration to use the Premium Tier network, and add the new instance group.
upvoted 1 times
...
cloudprincipal
1 year, 10 months ago
Selected Answer: B
https://cloud.google.com/load-balancing/docs/choosing-load-balancer#global-regional
upvoted 1 times
...
DebasishLowes
3 years ago
Ans : B
upvoted 2 times
...
saurabh1805
3 years, 5 months ago
I will also go with Option B
upvoted 6 times
...
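The frontend change from answer B can be sketched in two steps: reserve a global (Premium Tier) external IP, then point the forwarding rule at it. Standard Tier addresses are regional, which is why the frontend has to move tiers before backends in a second region can share one IP. This is a sketch assuming hypothetical resource names; the script only builds and prints the commands.

```shell
# Sketch: commands for moving the LB frontend to Premium Tier. The address
# and forwarding-rule names are hypothetical placeholders.
# 1. Reserve a global external IP (global addresses are Premium Tier).
ADDR_CMD="gcloud compute addresses create lb-global-ip --global --network-tier=PREMIUM"
# 2. Recreate/point the global forwarding rule at that address.
FWD_CMD="gcloud compute forwarding-rules create lb-fwd-rule --global \
  --address=lb-global-ip --target-http-proxy=lb-proxy --ports=80"
echo "${ADDR_CMD}"
echo "${FWD_CMD}"
```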

Question 89

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 89 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 89
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are the security admin of your company. You have 3,000 objects in your Cloud Storage bucket. You do not want to manage access to each object individually.
You also do not want the uploader of an object to always have full control of the object. However, you want to use Cloud Audit Logs to manage access to your bucket.
What should you do?

  • A. Set up an ACL with OWNER permission to a scope of allUsers.
  • B. Set up an ACL with READER permission to a scope of allUsers.
  • C. Set up a default bucket ACL and manage access for users using IAM.
  • D. Set up Uniform bucket-level access on the Cloud Storage bucket and manage access for users using IAM.
Suggested Answer: D 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
Fellipo
Highly Voted 4 years, 5 months ago
it's D, https://cloud.google.com/storage/docs/uniform-bucket-level-access#:~:text=When%20you%20enable%20uniform%20bucket,and%20the%20objects%20it%20contains.
upvoted 19 times
...
Xoxoo
Highly Voted 1 year, 6 months ago
Selected Answer: D
To manage access to your Cloud Storage bucket without having to manage access to each object individually, you should set up Uniform bucket-level access on the Cloud Storage bucket and manage access for users using IAM. Uniform bucket-level access allows you to use Identity and Access Management (IAM) alone to manage permissions for all objects contained inside the bucket or groups of objects with common name prefixes. This approach simplifies access management and ensures that all objects in the bucket have the same level of access. By using IAM, you can grant users specific permissions to access your Cloud Storage bucket, such as read, write, or delete permissions. You can also use Cloud Audit Logs to monitor and manage access to your bucket. This approach provides a secure environment for your Cloud Storage bucket while ensuring that only authorized users can access it.
upvoted 5 times
...
Zek
Most Recent 4 months, 1 week ago
Selected Answer: D
Answer is D https://cloud.google.com/storage/docs/uniform-bucket-level-access#overview
upvoted 1 times
Zek
4 months, 1 week ago
Not A, B or C because "ACLs are used only by Cloud Storage and have limited permission options, but they allow you to grant permissions on a per-object basis"
upvoted 1 times
...
...
BPzen
4 months, 1 week ago
Selected Answer: D
Explanation: When you want to avoid managing access to individual objects in a Google Cloud Storage bucket, Uniform bucket-level access simplifies access control by enforcing consistent permissions at the bucket level. It disables per-object ACLs and enables centralized access management using IAM roles and permissions.
upvoted 1 times
...
tia_gll
1 year ago
Selected Answer: D
ans is D
upvoted 1 times
...
nccdebug
1 year, 1 month ago
Ans: D. https://cloud.google.com/storage/docs/uniform-bucket-level-access
upvoted 1 times
...
AzureDP900
2 years, 5 months ago
D is right
upvoted 3 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: D
D. Set up Uniform bucket-level access on the Cloud Storage bucket and manage access for users using IAM.
upvoted 5 times
...
cloudprincipal
2 years, 10 months ago
Selected Answer: D
https://cloud.google.com/storage/docs/uniform-bucket-level-access#enabled
upvoted 3 times
...
ramravella
3 years, 9 months ago
Answer is A. Read the note below in the below URL https://cloud.google.com/storage/docs/access-control/lists Note: You cannot grant discrete permissions for reading or writing ACLs or other metadata. To allow someone to read and write ACLs, you must grant them OWNER permission.
upvoted 1 times
Zuy01
3 years, 8 months ago
The question mentions "do not want the uploader of an object to always have full control of the object"; that means you shouldn't grant the OWNER permission, hence the best ans is D.
upvoted 3 times
...
...
[Removed]
3 years, 12 months ago
A grants Owner???too much for this.
upvoted 2 times
...
[Removed]
4 years, 5 months ago
Ans - D
upvoted 3 times
...
saurabh1805
4 years, 5 months ago
I will go with uniform level access and manage access via IAM, Hence D.
upvoted 2 times
...
passtest100
4 years, 6 months ago
SHOULD BE D
upvoted 2 times
...
skshak
4 years, 6 months ago
Answer C https://cloud.google.com/storage/docs/access-control Uniform (recommended): Uniform bucket-level access allows you to use Identity and Access Management (IAM) alone to manage permissions. IAM applies permissions to all the objects contained inside the bucket or groups of objects with common name prefixes. IAM also allows you to use features that are not available when working with ACLs, such as IAM Conditions and Cloud Audit Logs.
upvoted 1 times
skshak
4 years, 6 months ago
Sorry, It is D. It was typo.
upvoted 3 times
mlyu
4 years, 6 months ago
The question states they need Cloud Audit Logs for GCS access; however, uniform bucket-level access has a restriction involving Cloud Audit Logs. See https://cloud.google.com/storage/docs/uniform-bucket-level-access The following restrictions apply when using uniform bucket-level access: Cloud Logging and Cloud Audit Logs cannot export to buckets that have uniform bucket-level access enabled.
upvoted 1 times
FatCharlie
4 years, 4 months ago
They're not saying they want to export the logs to the bucket. They're just saying they want to "use Cloud Audit Logs to manage access to your bucket" (whatever that means).
upvoted 1 times
...
...
...
...
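Answer D boils down to two commands: enable uniform bucket-level access, then grant access through IAM at the bucket level. This is a minimal sketch, assuming a hypothetical bucket and reader group; the script only builds and prints the `gcloud storage` invocations.

```shell
# Sketch: enable uniform bucket-level access and grant read access via IAM.
# The bucket name and group address are hypothetical placeholders.
BUCKET="gs://my-3000-object-bucket"

# Disable per-object ACLs; IAM alone governs all objects in the bucket.
ENABLE_CMD="gcloud storage buckets update ${BUCKET} --uniform-bucket-level-access"
# Grant read on every object with one bucket-level IAM binding.
GRANT_CMD="gcloud storage buckets add-iam-policy-binding ${BUCKET} \
  --member=group:readers@example.com --role=roles/storage.objectViewer"
echo "${ENABLE_CMD}"
echo "${GRANT_CMD}"
```

Once uniform bucket-level access is on, the "uploader keeps full control" problem disappears, since object ACLs (including the uploader's OWNER ACL) stop applying.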

Question 90

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 90 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 90
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are the security admin of your company. Your development team creates multiple GCP projects under the "implementation" folder for several dev, staging, and production workloads. You want to prevent data exfiltration by malicious insiders or compromised code by setting up a security perimeter. However, you do not want to restrict communication between the projects.
What should you do?

  • A. Use a Shared VPC to enable communication between all projects, and use firewall rules to prevent data exfiltration.
  • B. Create access levels in Access Context Manager to prevent data exfiltration, and use a shared VPC for communication between projects.
  • C. Use an infrastructure-as-code software tool to set up a single service perimeter and to deploy a Cloud Function that monitors the "implementation" folder via Stackdriver and Cloud Pub/Sub. When the function notices that a new project is added to the folder, it executes Terraform to add the new project to the associated perimeter.
  • D. Use an infrastructure-as-code software tool to set up three different service perimeters for dev, staging, and prod and to deploy a Cloud Function that monitors the "implementation" folder via Stackdriver and Cloud Pub/Sub. When the function notices that a new project is added to the folder, it executes Terraform to add the new project to the respective perimeter.
Suggested Answer: C 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
jonclem
Highly Voted 4 years, 4 months ago
I'd also go with option B and here's why: https://cloud.google.com/access-context-manager/docs/overview Option A was a consideration until I came across this: https://cloud.google.com/security/data-loss-prevention/preventing-data-exfiltration
upvoted 17 times
...
dzhu
Highly Voted 3 years, 7 months ago
I think this is C. Communication between the project is necessary tied to VPC, but you need to include all projects under implementation folder in a single VPCSC
upvoted 11 times
...
YourFriendlyNeighborhoodSpider
Most Recent 3 weeks, 6 days ago
Selected Answer: B
B. Create access levels in Access Context Manager to prevent data exfiltration, and use a shared VPC for communication between projects. Explanation: Access Context Manager allows you to define access levels based on various attributes, such as the user's identity and the context of their request, which can help limit actions that could be used for data exfiltration. This setup allows you to enforce security policies around sensitive data while still allowing communication through a Shared VPC. Shared VPC enables networking between different projects, ensuring that resources can communicate securely without exposing them to the public internet or compromising security policies.
upvoted 1 times
...
BPzen
4 months, 1 week ago
Selected Answer: C
Explanation: To prevent data exfiltration while allowing communication between projects, a single service perimeter is the best approach. This creates a secure boundary around all projects under the "implementation" folder, ensuring that resources within the perimeter can communicate while preventing unauthorized access or data transfer outside the perimeter. Automating the addition of new projects to the service perimeter ensures scalability and compliance with organizational security requirements.
upvoted 1 times
...
Bettoxicity
1 year ago
Selected Answer: D
Similarities with option C: Use of IaC and a Cloud Function: option D also uses an IaC tool (Terraform) and a Cloud Function to automate the creation and management of the service perimeters. Monitoring with Stackdriver and Cloud Pub/Sub: Stackdriver and Cloud Pub/Sub are used to detect the creation of new projects. Differences from option C: Number of service perimeters: option D creates three different service perimeters (dev, staging, prod), while option C creates only one. Automatic project assignment: option D's Cloud Function automatically assigns new projects to the corresponding service perimeter. In option C, assigning projects to the service perimeters must be done manually.
upvoted 1 times
...
Sukon_Desknot
1 year, 2 months ago
Selected Answer: D
Using Access Context Manager service perimeters provides a security boundary to prevent data exfiltration. Separate perimeters for dev, staging, prod provide appropriate isolation. Shared VPC allows communication between projects within the perimeter. The Cloud Function automatically adds new projects to the right perimeter via Terraform. This meets all requirements - security perimeter to prevent data exfiltration, communication between projects, and automatic perimeter assignment for new projects.
upvoted 1 times
...
ssk119
1 year, 7 months ago
Just having a VPC alone does not protect against data exfiltration. The correct answer is B
upvoted 1 times
desertlotus1211
1 year, 7 months ago
you'd have to re-create the projects as a Host VPC... can't do that... too much work
upvoted 1 times
...
...
[Removed]
1 year, 8 months ago
Selected Answer: C
"C" As others noted, VPC Service Controls are designed specifically to protect against the risks described in the question. Only one Service perimeter is needed which excludes "D". https://cloud.google.com/vpc-service-controls/docs/overview#benefits
upvoted 2 times
...
fad3r
2 years ago
This question is very old. The answer is VPC Service controls. Highly doubt this is still relevant.
upvoted 5 times
...
soltium
2 years, 6 months ago
Selected Answer: C
C. The keyword "prevent data exfiltration by malicious insiders or compromised code" is listed as the benefits of VPC service control https://cloud.google.com/vpc-service-controls/docs/overview#benefits Only C and D creates service perimeters, but D creates three and doesn't specify a bridge to connect those service perimeters so I choose C as the answer.
upvoted 4 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: C
C. Use an infrastructure-as-code software tool to set up a single service perimeter and to deploy a Cloud Function that monitors the "implementation" folder via Stackdriver and Cloud Pub/Sub. When the function notices that a new project is added to the folder, it executes Terraform to add the new project to the associated perimeter.
upvoted 1 times
...
cloudprincipal
2 years, 10 months ago
Selected Answer: C
eshtanaka is right: https://github.com/terraform-google-modules/terraform-google-vpc-service-controls/tree/master/examples/automatic_folder
upvoted 3 times
...
sudarchary
3 years, 2 months ago
Answer is A. Please focus on "security perimeter" and "compromised code".
upvoted 1 times
...
eshtanaka
3 years, 5 months ago
Correct answer is C. See the description for "automatically secured folder" https://github.com/terraform-google-modules/terraform-google-vpc-service-controls/tree/master/examples/automatic_folder
upvoted 3 times
...
nilb94
3 years, 7 months ago
Think it should be C. Access Context Manager docs say it is for ingress. Service Controls seems correct for exfiltration, and projects must be allowed to communicate with each other so they need to be in a single service perimeter.
upvoted 3 times
...
desertlotus1211
4 years ago
Answer is B: https://cloud.google.com/access-context-manager/docs/overview You need to read the question AND Answer carefully before selecting. Answer A is in Answer B
upvoted 2 times
...
DebasishLowes
4 years ago
Ans : A. To make the communication between different projects, shared vpc is required.
upvoted 1 times
...

Question 91


Exam Professional Cloud Security Engineer topic 1 question 91 discussion

Question #: 91
Topic #: 1

You need to provide a corporate user account in Google Cloud for each of your developers and operational staff who need direct access to GCP resources.
Corporate policy requires you to maintain the user identity in a third-party identity management provider and leverage single sign-on. You learn that a significant number of users are using their corporate domain email addresses for personal Google accounts, and you need to follow Google recommended practices to convert existing unmanaged users to managed accounts.
Which two actions should you take? (Choose two.)

  • A. Use Google Cloud Directory Sync to synchronize your local identity management system to Cloud Identity.
  • B. Use the Google Admin console to view which managed users are using a personal account for their recovery email.
  • C. Add users to your managed Google account and force users to change the email addresses associated with their personal accounts.
  • D. Use the Transfer Tool for Unmanaged Users (TTUU) to find users with conflicting accounts and ask them to transfer their personal Google accounts.
  • E. Send an email to all of your employees and ask those users with corporate email addresses for personal Google accounts to delete the personal accounts immediately.
Suggested Answer: AD 🗳️

Comments

VicF
Highly Voted 2 years, 11 months ago
A&D. A- Requires third-party IDp and wants to leverage single sign-on. D- https://cloud.google.com/architecture/identity/migrating-consumer-accounts#initiating_a_transfer "In addition to showing you all unmanaged accounts, the transfer tool for unmanaged users lets you initiate an account transfer by sending an account transfer request."
upvoted 17 times
...
skshak
Highly Voted 3 years, 6 months ago
The answer is A, D. A - Requirement is a third-party identity management provider and leveraging single sign-on. D - https://cloud.google.com/architecture/identity/assessing-existing-user-accounts (Use the transfer tool for unmanaged users to identify consumer accounts that use an email address that matches one of the domains you've added to Cloud Identity or G Suite.)
upvoted 8 times
...
dsafeqf
Most Recent 6 months, 2 weeks ago
C, D are correct - https://cloud.google.com/architecture/identity/assessing-existing-user-accounts
upvoted 1 times
...
Littleivy
1 year, 5 months ago
Selected Answer: AD
A to sync IdP D to transfer unmanaged accounts
upvoted 3 times
...
AzureDP900
1 year, 5 months ago
AD is right
upvoted 2 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: AD
A. Use Google Cloud Directory Sync to synchronize your local identity management system to Cloud Identity. D. Use the Transfer Tool for Unmanaged Users (TTUU) to find users with conflicting accounts and ask them to transfer their personal Google accounts.
upvoted 4 times
...
cloudprincipal
1 year, 10 months ago
Selected Answer: AD
see other comments
upvoted 3 times
...
sudarchary
2 years, 2 months ago
Answers are: A&C https://cloud.google.com/architecture/identity/assessing-existing-user-accounts
upvoted 1 times
...
CloudTrip
3 years, 1 month ago
The keyword here is "convert": follow Google recommended practices to convert existing unmanaged users to managed accounts. So why sync unmanaged users with Cloud Identity? I would prefer answers C and D
upvoted 2 times
ThisisJohn
2 years, 3 months ago
But dont forget about "Corporate policy requires you to maintain the user identity in a third-party identity management provider". I believe that makes it A and D
upvoted 1 times
...
...
mikelabs
3 years, 4 months ago
Answer is C,D. From GSuite Console you can do both.
upvoted 2 times
...
[Removed]
3 years, 5 months ago
Ans - AD
upvoted 4 times
[Removed]
3 years, 5 months ago
https://cloud.google.com/architecture/identity/migrating-consumer-accounts#initiating_a_transfer
upvoted 7 times
...
...
saurabh1805
3 years, 5 months ago
A, D is correct answer
upvoted 4 times
...
lordb
3 years, 6 months ago
https://cloud.google.com/architecture/identity/assessing-existing-user-accounts
upvoted 2 times
...

Question 92


Exam Professional Cloud Security Engineer topic 1 question 92 discussion

Question #: 92
Topic #: 1

You are on your company's development team. You noticed that your web application hosted in staging on GKE dynamically includes user data in web pages without first properly validating the inputted data. This could allow an attacker to execute gibberish commands and display arbitrary content in a victim user's browser in a production environment.
How should you prevent and fix this vulnerability?

  • A. Use Cloud IAP based on IP address or end-user device attributes to prevent and fix the vulnerability.
  • B. Set up an HTTPS load balancer, and then use Cloud Armor for the production environment to prevent the potential XSS attack.
  • C. Use Web Security Scanner to validate the usage of an outdated library in the code, and then use a secured version of the included library.
  • D. Use Web Security Scanner in staging to simulate an XSS injection attack, and then use a templating system that supports contextual auto-escaping.
Suggested Answer: D 🗳️

Comments

sudarchary
Highly Voted 2 years, 2 months ago
Selected Answer: D
Option D is correct as using web security scanner will allow to detect the vulnerability and templating system
upvoted 10 times
...
deardeer
Highly Voted 3 years, 2 months ago
Answer is D. There is mention about simulating in Web Security Scanner. "Web Security Scanner cross-site scripting (XSS) injection testing *simulates* an injection attack by inserting a benign test string into user-editable fields and then performing various user actions." https://cloud.google.com/security-command-center/docs/how-to-remediate-web-security-scanner-findings#xss
upvoted 7 times
AzureDP900
1 year, 5 months ago
Agree with D
upvoted 2 times
...
ThisisJohn
2 years, 3 months ago
Agree. Also from your link "There are various ways to fix this problem. The recommended fix is to escape all output and use a templating system that supports contextual auto-escaping." So escaping is a way to fix the issue, which is required by the question
upvoted 1 times
...
...
[Removed]
Most Recent 8 months, 3 weeks ago
Selected Answer: D
"D" Using Web Security Scanner in Security Command Center to find XSS vulnerabilities. This page explains recommended mitigation techniques such as using contextual auto-escaping. https://cloud.google.com/security-command-center/docs/how-to-remediate-web-security-scanner-findings#xss
upvoted 2 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: D
D. Use Web Security Scanner in staging to simulate an XSS injection attack, and then use a templating system that supports contextual auto-escaping.
upvoted 2 times
...
tangac
1 year, 7 months ago
Selected Answer: D
Clearly D; everything is explained here: https://cloud.google.com/security-command-center/docs/how-to-remediate-web-security-scanner-findings Web Security Scanner cross-site scripting (XSS) injection testing simulates an injection attack by inserting a benign test string into user-editable fields and then performing various user actions. Custom detectors observe the browser and DOM during this test to determine whether an injection was successful and assess its potential for exploitation. There are various ways to fix this issue. The recommended fix is to escape all output and use a templating system that supports contextual auto-escaping.
upvoted 2 times
...
Lancyqusa
2 years, 3 months ago
It should be C because the web security scanner will identify the library known to contain the security issue as in the examples here - https://cloud.google.com/security-command-center/docs/how-to-use-web-security-scanner#example_findings . Once the security issue is identified, the vulnerability can be fixed by a secure version of that library.
upvoted 1 times
...
DebasishLowes
3 years ago
Ans : D
upvoted 2 times
...
pyc
3 years, 2 months ago
C, D is wrong, as Security Scanner can't "simulate" anything. It's a scanner. B is not right, as Armor can't do input data validation, it just deny/allow IP/CIDR.
upvoted 1 times
desertlotus1211
3 years ago
Yes it can simulate... Read the documentation first...
upvoted 3 times
...
...
KarVaid
3 years, 3 months ago
https://cloud.google.com/security-command-center/docs/concepts-web-security-scanner-overview Security Scanner should be able to scan for XSS vulnerabilities as well. Option D is better.
upvoted 2 times
KarVaid
3 years, 3 months ago
Cloud armor can prevent the vulnerability but to fix it, you would need Security scanner.
upvoted 1 times
...
...
Fellipo
3 years, 5 months ago
B , https://cloud.google.com/armor
upvoted 5 times
...
[Removed]
3 years, 5 months ago
Ans - D
upvoted 3 times
...
HectorLeon2099
3 years, 6 months ago
Answer is B. Web Security Scanner can look for XSS vulnerabilities but can't simulate XSS injection attack. https://cloud.google.com/armor/docs/rule-tuning#cross-site_scripting_xss
upvoted 3 times
FatCharlie
3 years, 4 months ago
Web Security Scanner does appear to be able to simulate an XSS attack. "Web Security Scanner cross-site scripting (XSS) injection testing simulates an injection attack by inserting a benign test string into user-editable fields and then performing various user actions. Custom detectors observe the browser and DOM during this test to determine whether an injection was successful and assess its potential for exploitation." https://cloud.google.com/security-command-center/docs/how-to-remediate-web-security-scanner-findings#remediate-findings
upvoted 4 times
...
saurabh1805
3 years, 5 months ago
Agree B is correct answer here.
upvoted 2 times
...
...
Jerrard
3 years, 6 months ago
D. https://cloud.google.com/security-command-center/docs/concepts-web-security-scanner-overview
upvoted 4 times
...
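The remediation these comments converge on (escape all output through a templating system with contextual auto-escaping) can be sketched in plain Python. This is a conceptual illustration using only the standard library, not Web Security Scanner or any particular templating engine; the function name is made up for the example:

```python
import html

def render_greeting(user_input: str) -> str:
    """Build an HTML fragment, escaping user data before interpolation."""
    # html.escape neutralizes <, >, &, and quotes, so injected markup is
    # rendered by the browser as inert text rather than executed.
    return "<p>Hello, {}!</p>".format(html.escape(user_input, quote=True))

payload = "<script>alert('xss')</script>"
safe = render_greeting(payload)
# The script tag survives only as escaped text, not as executable markup.
assert "<script>" not in safe
print(safe)
```

Engines such as Jinja2 (with autoescaping enabled) or Go's html/template do this contextually and automatically, which is what the recommended fix in the scanner findings refers to.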

Question 93


Exam Professional Cloud Security Engineer topic 1 question 93 discussion

Question #: 93
Topic #: 1

You are part of a security team that wants to ensure that a Cloud Storage bucket in Project A can only be readable from Project B. You also want to ensure that data in the Cloud Storage bucket cannot be accessed from or copied to Cloud Storage buckets outside the network, even if the user has the correct credentials.
What should you do?

  • A. Enable VPC Service Controls, create a perimeter with Project A and B, and include Cloud Storage service.
  • B. Enable Domain Restricted Sharing Organization Policy and Bucket Policy Only on the Cloud Storage bucket.
  • C. Enable Private Access in Project A and B networks with strict firewall rules to allow communication between the networks.
  • D. Enable VPC Peering between Project A and B networks with strict firewall rules to allow communication between the networks.
Suggested Answer: A 🗳️

Comments

FatCharlie
Highly Voted 3 years, 4 months ago
The answer is A. This question is covered by an example given for VPC Service Controls perimeters: https://cloud.google.com/vpc-service-controls/docs/overview#isolate
upvoted 20 times
AzureDP900
1 year, 5 months ago
A is right
upvoted 2 times
...
...
[Removed]
Most Recent 8 months, 3 weeks ago
Selected Answer: A
"A" VPC Service controls were created for this type of use case. https://cloud.google.com/vpc-service-controls/docs/overview#isolate
upvoted 2 times
...
alleinallein
1 year ago
Why not D?
upvoted 1 times
...
shayke
1 year, 3 months ago
Selected Answer: A
A - a classic VPCSC question
upvoted 2 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: A
A. Enable VPC Service Controls, create a perimeter with Project A and B, and include Cloud Storage service.
upvoted 3 times
...
cloudprincipal
1 year, 10 months ago
Selected Answer: A
https://cloud.google.com/vpc-service-controls/docs/overview#isolate
upvoted 2 times
...
nilb94
2 years, 7 months ago
A - VPC Service Controls
upvoted 3 times
...
jeeet_
2 years, 10 months ago
The answer is most certainly A. VPC Service Controls lets a security team create fine-grained perimeters across projects within an organization; a security perimeter for API-based services (Bigtable instances, Storage buckets, BigQuery datasets) is exactly what VPC Service Controls is for. In my test I chose option B, but Domain Restricted Sharing organization policies only limit resource sharing based on domain, so a user out on the internet with valid credentials could still access resources their domain access level allows. So option B is wrong.
upvoted 2 times
...
HateMicrosoft
3 years ago
The correct answer is A. This is achieved with VPC Service Controls via the perimeter setup. Overview of VPC Service Controls: https://cloud.google.com/vpc-service-controls/docs/overview
upvoted 2 times
...
jonclem
3 years, 4 months ago
I would say option A is a better fit due to VPC Service Controls.
upvoted 3 times
...
jonclem
3 years, 4 months ago
I'd be inclined to agree, option B seems a better fit. Here's my reasoning behind it: https://cloud.google.com/access-context-manager/docs/overview
upvoted 1 times
jonclem
3 years, 4 months ago
please ignore this comment, wrong question.
upvoted 1 times
...
...
saurabh1805
3 years, 5 months ago
What is being asked about is data exfiltration as well, which can only be prevented via a VPC Service Controls perimeter (with a perimeter bridge between both projects if they are in separate perimeters).
upvoted 1 times
Ducle
3 years, 5 months ago
A is better
upvoted 2 times
...
...
[Removed]
3 years, 5 months ago
Ans - B
upvoted 1 times
...
Jerrard
3 years, 6 months ago
B. https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains
upvoted 1 times
...
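As a concrete sketch of the winning answer, a single VPC Service Controls perimeter containing both projects and restricting the Cloud Storage service can be created from the CLI. The perimeter name, project numbers, and access policy ID below are placeholders:

```shell
# Single perimeter around Project A and Project B, restricting Cloud Storage.
# Calls to storage.googleapis.com from outside the perimeter are blocked,
# even when the caller presents valid credentials.
gcloud access-context-manager perimeters create storage_perimeter \
    --title="Project A + B storage perimeter" \
    --resources=projects/111111111111,projects/222222222222 \
    --restricted-services=storage.googleapis.com \
    --policy=0123456789
```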

Question 94


Exam Professional Cloud Security Engineer topic 1 question 94 discussion

Question #: 94
Topic #: 1

You are responsible for protecting highly sensitive data in BigQuery. Your operations teams need access to this data, but given privacy regulations, you want to ensure that they cannot read the sensitive fields such as email addresses and first names. These specific sensitive fields should only be available on a need-to-know basis to the Human Resources team. What should you do?

  • A. Perform data masking with the Cloud Data Loss Prevention API, and store that data in BigQuery for later use.
  • B. Perform data redaction with the Cloud Data Loss Prevention API, and store that data in BigQuery for later use.
  • C. Perform data inspection with the Cloud Data Loss Prevention API, and store that data in BigQuery for later use.
  • D. Perform tokenization for Pseudonymization with the Cloud Data Loss Prevention API, and store that data in BigQuery for later use.
Suggested Answer: D 🗳️

Comments

AwesomeGCP
Highly Voted 1 year, 6 months ago
Selected Answer: D
D. Perform tokenization for Pseudonymization with the Cloud Data Loss Prevention API, and store that data in BigQuery for later use.
upvoted 5 times
...
zellck
Highly Voted 1 year, 6 months ago
Selected Answer: D
D is the answer as tokenization can support re-identification for use by HR. https://cloud.google.com/dlp/docs/pseudonymization
upvoted 5 times
...
[Removed]
Most Recent 8 months, 3 weeks ago
Selected Answer: D
"D" Out of all the options listed, pseudonymization is the only reversible method, which is one of the requirements in the question. https://cloud.google.com/dlp/docs/transformations-reference#transformation_methods https://cloud.google.com/dlp/docs/pseudonymization
upvoted 3 times
...
Sammydp202020
1 year, 2 months ago
Selected Answer: D
Both A & D will do the job, but D is preferred as the data is PII and needs to stay secure. https://cloud.google.com/dlp/docs/pseudonymization#how-tokenization-works Why A is not an apt response: https://cloud.google.com/bigquery/docs/column-data-masking-intro The SHA-256 function used in data masking is type preserving, so the hash value it returns has the same data type as the column value. SHA-256 is a deterministic hashing function; an initial value always resolves to the same hash value. However, it does not require encryption keys. This makes it possible for a malicious actor to use a brute-force attack to determine the original value, by running all possible original values through the SHA-256 algorithm and seeing which one produces a hash that matches the hash returned by data masking.
upvoted 1 times
...
pedrojorge
1 year, 2 months ago
Selected Answer: D
D, as tokenization supports re-identification for the HR team
upvoted 2 times
...
therealsohail
1 year, 2 months ago
B is okay. Data redaction, as opposed to data masking or tokenization, completely removes or replaces the sensitive fields, so the operations teams cannot see the sensitive information. This ensures that the sensitive data is only available to the Human Resources team on a need-to-know basis, as per the privacy regulations. The Cloud Data Loss Prevention API is able to inspect and redact data, making it a suitable choice for this task.
upvoted 2 times
...
AzureDP900
1 year, 5 months ago
D is correct Pseudonymization is a de-identification technique that replaces sensitive data values with cryptographically generated tokens. Pseudonymization is widely used in industries like finance and healthcare to help reduce the risk of data in use, narrow compliance scope, and minimize the exposure of sensitive data to systems while preserving data utility and accuracy.
upvoted 4 times
...
Random_Mane
1 year, 6 months ago
Selected Answer: A
A https://cloud.google.com/bigquery/docs/column-data-masking-intro
upvoted 3 times
heftjustice
1 year, 3 months ago
Data masking doesn't need DLP.
upvoted 2 times
...
...
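The reversibility distinction driving the vote (tokenization can be re-identified by an authorized party; a bare hash cannot be safely relied on) can be illustrated with a standard-library sketch. This is a conceptual stand-in, not the Cloud DLP API: DLP's deterministic tokenization is driven by a KMS-wrapped key, but the keyed-HMAC idea below is analogous, and all names here are invented for the example:

```python
import hashlib
import hmac

SECRET_KEY = b"demo-only-key"   # stands in for a KMS-wrapped DLP key

# token -> original value, held only by the authorized party (HR here)
_token_table: dict = {}

def tokenize(value: str) -> str:
    """Replace a sensitive value with a keyed, deterministic token."""
    token = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]
    _token_table[token] = value   # retained so HR can re-identify on demand
    return token

def reidentify(token: str) -> str:
    """Authorized reverse lookup (need-to-know access for HR)."""
    return _token_table[token]

t = tokenize("alice@example.com")
assert t != "alice@example.com"              # ops team sees only tokens
assert tokenize("alice@example.com") == t    # deterministic: joins still work
assert reidentify(t) == "alice@example.com"  # HR can recover the original
```

By contrast, an unkeyed SHA-256 of the same column is deterministic but keyless, so (as noted in the masking comment above) an attacker can brute-force candidate emails offline; the secret key is what blocks that.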

Question 95


Exam Professional Cloud Security Engineer topic 1 question 95 discussion

Question #: 95
Topic #: 1

You are a Security Administrator at your organization. You need to restrict service account creation capability within production environments. You want to accomplish this centrally across the organization. What should you do?

  • A. Use Identity and Access Management (IAM) to restrict access of all users and service accounts that have access to the production environment.
  • B. Use organization policy constraints/iam.disableServiceAccountKeyCreation boolean to disable the creation of new service accounts.
  • C. Use organization policy constraints/iam.disableServiceAccountKeyUpload boolean to disable the creation of new service accounts.
  • D. Use organization policy constraints/iam.disableServiceAccountCreation boolean to disable the creation of new service accounts.
Suggested Answer: D 🗳️

Comments

Tabayashi
Highly Voted 1 year, 11 months ago
Answer is (D). You can use the iam.disableServiceAccountCreation boolean constraint to disable the creation of new service accounts. This allows you to centralize management of service accounts while not restricting the other permissions your developers have on projects. https://cloud.google.com/resource-manager/docs/organization-policy/restricting-service-accounts#disable_service_account_creation
upvoted 11 times
...
[Removed]
Highly Voted 8 months, 3 weeks ago
Selected Answer: D
"D" Refreshing tabayashi's comment. https://cloud.google.com/resource-manager/docs/organization-policy/restricting-service-accounts#disable_service_account_creation
upvoted 5 times
...
TNT87
Most Recent 1 year ago
Selected Answer: D
Answer D You can use the iam.disableServiceAccountCreation boolean constraint to disable the creation of new service accounts. This allows you to centralize management of service accounts while not restricting the other permissions your developers have on projects.
upvoted 1 times
...
pskm12
1 year, 2 months ago
In the question, it is clearly mentioned that -> You want to accomplish this centrally across the organization. So, it would obviously be D
upvoted 1 times
...
gupta3
1 year, 3 months ago
Selected Answer: A
Aren't these two requirements conflicting: restricting service account creation within production environments, yet enforcing the policy centrally across the org?
upvoted 1 times
...
AzureDP900
1 year, 5 months ago
D is correct
upvoted 2 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: D
D. Use organization policy constraints/iam.disableServiceAccountCreation boolean to disable the creation of new service accounts.
upvoted 2 times
...
zellck
1 year, 6 months ago
Selected Answer: D
D is the answer.
upvoted 2 times
...
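For reference, the constraint named in answer D is enforced org-wide with a boolean policy. A minimal policy file in the legacy `gcloud resource-manager org-policies set-policy` format (the organization ID is a placeholder):

```yaml
# policy.yaml, applied with:
#   gcloud resource-manager org-policies set-policy policy.yaml --organization=ORG_ID
constraint: constraints/iam.disableServiceAccountCreation
booleanPolicy:
  enforced: true
```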

Question 96


Exam Professional Cloud Security Engineer topic 1 question 96 discussion

Question #: 96
Topic #: 1

You are the project owner for a regulated workload that runs in a project you own and manage as an Identity and Access Management (IAM) admin. For an upcoming audit, you need to provide access reviews evidence. Which tool should you use?

  • A. Policy Troubleshooter
  • B. Policy Analyzer
  • C. IAM Recommender
  • D. Policy Simulator
Suggested Answer: B 🗳️

Comments

mouchu
Highly Voted 1 year, 4 months ago
Answer = B https://cloud.google.com/policy-intelligence/docs/policy-analyzer-overview
upvoted 10 times
...
sumundada
Highly Voted 1 year, 2 months ago
Selected Answer: B
https://cloud.google.com/policy-intelligence/docs/policy-analyzer-overview
upvoted 5 times
...
rwintrob
Most Recent 8 months ago
B policy analyzer is the correct answer
upvoted 2 times
...
AzureDP900
11 months, 1 week ago
B policy analyzer is correct
upvoted 1 times
...
AwesomeGCP
1 year ago
Selected Answer: B
B. Policy Analyzer
upvoted 2 times
...
zellck
1 year ago
Selected Answer: B
B is the answer. https://cloud.google.com/policy-intelligence/docs/policy-analyzer-overview Policy Analyzer lets you find out which principals (for example, users, service accounts, groups, and domains) have what access to which Google Cloud resources based on your IAM allow policies.
upvoted 3 times
...
cloudprincipal
1 year, 4 months ago
Selected Answer: B
https://cloud.google.com/policy-intelligence/docs/policy-analyzer-overview
upvoted 5 times
...
szl0144
1 year, 4 months ago
B is correct, guys
upvoted 4 times
...
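Policy Analyzer queries can also be run from the CLI via Cloud Asset Inventory, which is handy for producing exportable access-review evidence for an audit. A sketch (organization ID and identity are placeholders):

```shell
# Who has what access? Analyze IAM allow policies for one identity
# across the whole organization.
gcloud asset analyze-iam-policy \
    --organization=123456789012 \
    --identity="user:auditor@example.com"
```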

Question 97


Exam Professional Cloud Security Engineer topic 1 question 97 discussion

Question #: 97
Topic #: 1

Your organization has implemented synchronization and SAML federation between Cloud Identity and Microsoft Active Directory. You want to reduce the risk of
Google Cloud user accounts being compromised. What should you do?

  • A. Create a Cloud Identity password policy with strong password settings, and configure 2-Step Verification with security keys in the Google Admin console.
  • B. Create a Cloud Identity password policy with strong password settings, and configure 2-Step Verification with verification codes via text or phone call in the Google Admin console.
  • C. Create an Active Directory domain password policy with strong password settings, and configure post-SSO (single sign-on) 2-Step Verification with security keys in the Google Admin console.
  • D. Create an Active Directory domain password policy with strong password settings, and configure post-SSO (single sign-on) 2-Step Verification with verification codes via text or phone call in the Google Admin console.
Suggested Answer: C 🗳️

Comments

coco10k
Highly Voted 1 year, 5 months ago
Answer C: "We recommend against using text messages. The National Institute of Standards and Technology (NIST) no longer recommends SMS-based 2SV due to the hijacking risk from state-sponsored entities."
upvoted 6 times
gcpengineer
11 months ago
user accounts don't need Admin console access
upvoted 1 times
...
...
uiuiui
Most Recent 5 months ago
Selected Answer: C
"C" Please
upvoted 2 times
...
[Removed]
8 months, 3 weeks ago
Selected Answer: C
"C" Because it's federated access, the password policy stays with the origin IDP (Active Directory in this case) while the post-sso behavior/controls are in Google Cloud. In terms of the actual second factor, security keys are far more secure than otp via text since those can be defeated through smishing or other types of attacks. https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-introduction#implementing_federation https://cloud.google.com/identity/solutions/enforce-mfa#use_security_keys
upvoted 4 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: C
C. Create an Active Directory domain password policy with strong password settings, and configure post-SSO (single sign-on) 2-Step Verification with security keys in the Google Admin console.
upvoted 3 times
...
jitu028
1 year, 6 months ago
Answer is - C https://cloud.google.com/identity/solutions/enforce-mfa#use_security_keys Use security keys We recommend requiring security keys for those employees who create and access data that needs the highest level of security. You should require 2SV for all other employees and encourage them to use security keys. Security keys offer the most secure form of 2SV. They are based on the open standard developed by Google as part of the Fast Identity Online (FIDO) Alliance. Security keys require a compatible browser on user devices.
upvoted 2 times
AzureDP900
1 year, 5 months ago
Agree with C and explanation
upvoted 1 times
...
...
szl0144
1 year, 10 months ago
C is the answer because a security key is more secure than a 2FA code
upvoted 4 times
...
mT3
1 year, 10 months ago
Selected Answer: C
C:correct answer
upvoted 4 times
...
mouchu
1 year, 11 months ago
Answer = B
upvoted 1 times
...

Question 98


Exam Professional Cloud Security Engineer topic 1 question 98 discussion

Question #: 98
Topic #: 1

You have been tasked with implementing external web application protection against common web application attacks for a public application on Google Cloud.
You want to validate these policy changes before they are enforced. What service should you use?

  • A. Google Cloud Armor's preconfigured rules in preview mode
  • B. Prepopulated VPC firewall rules in monitor mode
  • C. The inherent protections of Google Front End (GFE)
  • D. Cloud Load Balancing firewall rules
  • E. VPC Service Controls in dry run mode
Suggested Answer: A 🗳️

Comments

Tabayashi
Highly Voted 2 years, 5 months ago
Answer is (A). You can preview the effects of a rule without enforcing it. In preview mode, actions are noted in Cloud Monitoring. You can choose to preview individual rules in a security policy, or you can preview every rule in the policy. https://cloud.google.com/armor/docs/security-policy-overview#preview_mode
upvoted 10 times
AzureDP900
1 year, 11 months ago
A is right
upvoted 1 times
...
...
tia_gll
Most Recent 6 months, 3 weeks ago
Selected Answer: A
ans is A
upvoted 1 times
...
[Removed]
1 year, 2 months ago
Selected Answer: A
"A" Web Application Firewall (Cloud Armor) is the answer here with preview mode. https://cloud.google.com/armor/docs/security-policy-overview#preview_mode
upvoted 2 times
...
AwesomeGCP
2 years ago
Selected Answer: A
A. Google Cloud Armor's preconfigured rules in preview mode
upvoted 2 times
...
sumundada
2 years, 2 months ago
Selected Answer: A
Answer is (A).
upvoted 2 times
...
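Preview is set per rule in Google Cloud Armor. As a sketch (the policy name and priority are placeholders), a preconfigured XSS rule can be added in preview mode so matches are logged to Cloud Monitoring but not enforced, which is exactly the validate-before-enforce behavior the question asks for:

```shell
# Evaluate the preconfigured XSS signatures without enforcing them:
gcloud compute security-policies rules create 1000 \
    --security-policy=my-edge-policy \
    --expression="evaluatePreconfiguredExpr('xss-stable')" \
    --action=deny-403 \
    --preview
# Once validated, update the rule with --no-preview to start enforcing.
```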

Question 99

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 99 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 99
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are asked to recommend a solution to store and retrieve sensitive configuration data from an application that runs on Compute Engine. Which option should you recommend?

  • A. Cloud Key Management Service
  • B. Compute Engine guest attributes
  • C. Compute Engine custom metadata
  • D. Secret Manager
Suggested Answer: D 🗳️

Comments

Tabayashi
Highly Voted 2 years, 11 months ago
Answer is (D). Secret Manager is a secure and convenient storage system for API keys, passwords, certificates, and other sensitive data. Secret Manager provides a central place and single source of truth to manage, access, and audit secrets across Google Cloud. https://cloud.google.com/secret-manager
upvoted 13 times
...
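As a sketch of the Secret Manager flow from a Compute Engine application (the secret name, value, and service account below are hypothetical):

```shell
# Create a secret and add a version holding the config value
gcloud secrets create app-db-config --replication-policy=automatic
echo -n '{"host":"10.0.0.5","user":"app"}' | \
  gcloud secrets versions add app-db-config --data-file=-

# Grant the VM's service account read access to this secret only
gcloud secrets add-iam-policy-binding app-db-config \
  --member="serviceAccount:app-vm@my-project.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

# The application on the Compute Engine instance reads it at startup
gcloud secrets versions access latest --secret=app-db-config
```

Secret-level IAM bindings like this are what make Secret Manager preferable to custom metadata, which is readable by anyone with instance access.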
cloudprincipal
Highly Voted 2 years, 10 months ago
Selected Answer: D
You need a secrets management solution https://cloud.google.com/secret-manager
upvoted 5 times
cloudprincipal
2 years, 10 months ago
Sorry, this should be C
upvoted 1 times
badrik
2 years, 10 months ago
Sensitive information should never be stored or retrieved through custom metadata!
upvoted 4 times
...
...
...
BPzen
Most Recent 4 months, 1 week ago
Selected Answer: D
Explanation: Secret Manager is the recommended solution for storing and retrieving sensitive configuration data in Google Cloud. It is purpose-built for managing sensitive information like API keys, passwords, and other secrets securely, with robust access control and encryption.
upvoted 1 times
...
tia_gll
1 year ago
Selected Answer: D
ans is D
upvoted 1 times
...
dija123
1 year, 1 month ago
Selected Answer: D
Secret Manager
upvoted 1 times
...
[Removed]
1 year, 8 months ago
Selected Answer: D
"D" There's ambiguity in the question in terms of what type of configuration data we're talking about and how large. Even though the compute metadata server can hold sensitive values like ssh keys, there are limitations with respect to how much data you can put in there (reference A below). Secret manager also has a size limit on how much you can store. (reference B below). However, secret manager is explicitly said to be a good use case for Sensitive Configuration information (reference C below) which makes it the preferred answer. References: A- https://cloud.google.com/compute/docs/metadata/setting-custom-metadata#limitations B- https://cloud.google.com/secret-manager/quotas C- https://cloud.google.com/secret-manager/docs/overview#secret_manager
upvoted 3 times
...
AzureDP900
2 years, 5 months ago
D is correct
upvoted 2 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: D
D. Secret Manager
upvoted 2 times
...

Question 100

Question #: 100
Topic #: 1

You need to implement an encryption at-rest strategy that reduces key management complexity for non-sensitive data and protects sensitive data while providing the flexibility of controlling the key residency and rotation schedule. FIPS 140-2 L1 compliance is required for all data types. What should you do?

  • A. Encrypt non-sensitive data and sensitive data with Cloud External Key Manager.
  • B. Encrypt non-sensitive data and sensitive data with Cloud Key Management Service
  • C. Encrypt non-sensitive data with Google default encryption, and encrypt sensitive data with Cloud External Key Manager.
  • D. Encrypt non-sensitive data with Google default encryption, and encrypt sensitive data with Cloud Key Management Service.
Suggested Answer: D 🗳️

Comments

Chute5118
Highly Voted 2 years, 8 months ago
Selected Answer: D
Both B and D seem correct tbh. D might be "more correct" depending on the interpretation. "reduces key management complexity for non-sensitive data" - Google default encryption "protects sensitive data while providing the flexibility of controlling the key residency and rotation schedule" - Customer Managed Key
upvoted 6 times
AzureDP900
2 years, 5 months ago
I agree, D is right
upvoted 2 times
...
...
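Option D's split can be sketched with gcloud: nothing to configure for non-sensitive data (Google default encryption), while a Cloud KMS key for sensitive data pins residency via `--location` and sets the rotation schedule (names, location, and dates below are illustrative):

```shell
# Non-sensitive data: Google default encryption, no setup required.

# Sensitive data: a Cloud KMS key with controlled residency and
# a customer-defined rotation schedule.
gcloud kms keyrings create sensitive-kr --location=europe-west3
gcloud kms keys create sensitive-key \
  --keyring=sensitive-kr \
  --location=europe-west3 \
  --purpose=encryption \
  --rotation-period=90d \
  --next-rotation-time=2025-01-01T00:00:00Z
```

The key would then be referenced as a CMEK when creating the buckets, disks, or datasets that hold the sensitive data.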
zellck
Highly Voted 2 years, 6 months ago
Selected Answer: D
D is the answer.
upvoted 5 times
...
Zek
Most Recent 4 months, 1 week ago
Selected Answer: D
https://cloud.google.com/kms/docs/key-management-service#choose For example, you might use software keys for your least sensitive data and hardware or external keys for your most sensitive data. FIPS 140-2 Level 1 validated applies to both Google default encryption and Cloud Key Management Service (KMS)
upvoted 1 times
...
dija123
1 year, 1 month ago
Selected Answer: D
D. Encrypt non-sensitive data with Google default encryption, and encrypt sensitive data with Cloud Key Management Service (KMS)
upvoted 1 times
...
MHD84
1 year, 7 months ago
Correct answer is D; both KMS and default encryption are FIPS 140-2 L1 compliant. https://cloud.google.com/kms/docs/key-management-service#choose
upvoted 3 times
...
[Removed]
1 year, 8 months ago
Selected Answer: D
"D" Default encryption is Fips 140-2 L2 compliant (reference A below). Cloud KMS provides the rotation convenience desired (reference B below). References: A- https://cloud.google.com/docs/security/encryption/default-encryption B- https://cloud.google.com/docs/security/key-management-deep-dive
upvoted 3 times
...
passex
2 years, 3 months ago
"reduces key management" & "FIPS 140-2 L1 compliance is required for all data types" - strongly suggests answer B
upvoted 1 times
...
rrvv
2 years, 6 months ago
As FIPS 140-2 L1 compliance is required for all types of data, Cloud KMS should be used to manage encryption. Correct answer is B https://cloud.google.com/docs/security/key-management-deep-dive#software-protection-level:~:text=The%20Cloud%20KMS%20binary%20is%20built%20against%20FIPS%20140%2D2%20Level%201%E2%80%93validated%20Cryptographic%20Primitives%20of%20this%20module
upvoted 1 times
...
sumundada
2 years, 8 months ago
Selected Answer: D
Google uses a common cryptographic library, Tink, which incorporates our FIPS 140-2 Level 1 validated module, BoringCrypto, to implement encryption consistently across almost all Google Cloud products. To provide flexibility in controlling the key residency and rotation schedule, use Google default encryption for non-sensitive data and encrypt sensitive data with Cloud Key Management Service.
upvoted 3 times
...
nacying
2 years, 10 months ago
Selected Answer: B
Based on "FIPS 140-2 L1 compliance is required for all data types"
upvoted 3 times
...
cloudprincipal
2 years, 10 months ago
Selected Answer: D
KMS is ok for fips 140-2 level 1 https://cloud.google.com/docs/security/key-management-deep-dive#platform-overview
upvoted 2 times
cloudprincipal
2 years, 10 months ago
Regarding FIPS 140-2 level 1 and GCP default encryption: Google Cloud uses a FIPS 140-2 validated Level 1 encryption module (certificate 3318) in our production environment. https://cloud.google.com/docs/security/encryption/default-encryption?hl=en#encryption_of_data_at_rest
upvoted 2 times
...
...
mikesp
2 years, 10 months ago
In my opinion, the answer is B. The question says that it is necessary to control "key residency and rotation schedule" for both types of data. Default encryption at rest does not provide that but Cloud KMS does. Furthermore, Cloud KMS is FIPS140-2 level 1. https://cloud.google.com/docs/security/key-management-deep-dive
upvoted 3 times
csrazdan
2 years, 3 months ago
The answer is D. 1. reduce key management complexity for non-sensitive data --> Google Managed key 2. protects sensitive data while providing the flexibility of controlling the key residency and rotation schedule --> KMS
upvoted 1 times
...
...
szl0144
2 years, 10 months ago
D is the answer
upvoted 3 times
...
mouchu
2 years, 11 months ago
Answer = D
upvoted 3 times
...

Question 101

Question #: 101
Topic #: 1

Your company wants to determine what products they can build to help customers improve their credit scores depending on their age range. To achieve this, you need to join user information in the company's banking app with customers' credit score data received from a third party. While using this raw data will allow you to complete this task, it exposes sensitive data, which could be propagated into new systems.
This risk needs to be addressed using de-identification and tokenization with Cloud Data Loss Prevention while maintaining the referential integrity across the database. Which cryptographic token format should you use to meet these requirements?

  • A. Deterministic encryption
  • B. Secure, key-based hashes
  • C. Format-preserving encryption
  • D. Cryptographic hashing
Suggested Answer: A 🗳️

Comments

mT3
Highly Voted 2 years, 10 months ago
Selected Answer: A
”This encryption method is reversible, which helps to maintain referential integrity across your database and has no character-set limitations.” https://cloud.google.com/blog/products/identity-security/take-charge-of-your-data-how-tokenization-makes-data-usable-without-sacrificing-privacy
upvoted 11 times
[Removed]
1 year, 8 months ago
I meant both A and C not A and D.
upvoted 1 times
...
AzureDP900
2 years, 5 months ago
A is right
upvoted 1 times
...
...
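A hedged sketch of a deterministic-encryption de-identify call against the DLP REST API; PROJECT_ID, the wrapped key, and the surrogate name are placeholders, and the field names follow the published `CryptoDeterministicConfig` schema:

```shell
# De-identify an SSN into a reversible token that preserves
# referential integrity (same input -> same token).
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/PROJECT_ID/content:deidentify" \
  -d '{
    "item": {"value": "ssn 123-45-6789"},
    "inspectConfig": {"infoTypes": [{"name": "US_SOCIAL_SECURITY_NUMBER"}]},
    "deidentifyConfig": {
      "infoTypeTransformations": {
        "transformations": [{
          "primitiveTransformation": {
            "cryptoDeterministicConfig": {
              "cryptoKey": {
                "kmsWrapped": {
                  "wrappedKey": "BASE64_WRAPPED_KEY",
                  "cryptoKeyName": "projects/PROJECT_ID/locations/global/keyRings/kr/cryptoKeys/dlp-key"
                }
              },
              "surrogateInfoType": {"name": "SSN_TOKEN"}
            }
          }
        }]
      }
    }
  }'
```

Because the transformation is deterministic and keyed, the same SSN yields the same surrogate in both the banking-app table and the third-party credit-score table, so joins still work on the de-identified data.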
YourFriendlyNeighborhoodSpider
Most Recent 3 weeks, 6 days ago
Selected Answer: C
C. Format-preserving encryption Justification Based on Documentation: https://cloud.google.com/dlp/docs/transformations-reference#transformation_methods According to the Google Cloud DLP guidelines, format-preserving encryption (FPE) transforms sensitive data while keeping its original format. This is essential for working with structured data where you need to maintain the integrity of data types (e.g., keeping a credit score as a numeric field) while ensuring security through encryption. The ability to join user information in the banking app with credit score data while preserving the structure and format of the data is critical, especially since the goal is to analyze the data without exposing sensitive information.
upvoted 1 times
...
BPzen
4 months, 1 week ago
Selected Answer: C
Why C. Format-preserving encryption is correct: Format-preserving encryption (FPE) encrypts data while preserving its format (e.g., encrypting a credit card number would still result in a string with the same length and structure). It ensures that data relationships and referential integrity across systems remain intact. FPE is supported by Google Cloud DLP for tokenization tasks. Why not the other options: A. Deterministic encryption: Deterministic encryption ensures that the same plaintext always encrypts to the same ciphertext, which can preserve referential integrity. However, it doesn't inherently maintain the format of the original data, which might be a requirement in this case.
upvoted 2 times
YourFriendlyNeighborhoodSpider
3 weeks, 6 days ago
YES, C is correct: https://cloud.google.com/dlp/docs/transformations-reference#transformation_methods Format preserving encryption: Replaces an input value with a token that has been generated using format-preserving encryption (FPE) with the FFX mode of operation. This transformation method produces a token that is limited to the same alphabet as the input value and is the same length as the input value. FPE also supports re-identification given the original encryption key. -> The key is that we talk about tokenization.
upvoted 1 times
...
...
rsamant
1 year, 4 months ago
D. Cryptographic hashing, as it maintains referential integrity and is not reversible. https://cloud.google.com/dlp/docs/pseudonymization
upvoted 3 times
...
Xoxoo
1 year, 6 months ago
Selected Answer: A
To meet the requirements of de-identifying and tokenizing sensitive data while maintaining referential integrity across the database, you should use "Deterministic encryption." Deterministic encryption is a form of encryption where the same input value consistently produces the same encrypted output (token). This ensures referential integrity because the same original value will always result in the same token, allowing you to link and join data across different systems or databases while still protecting sensitive information. Format-preserving encryption is a specific form of deterministic encryption that preserves the format and length of the original data, which can be useful for maintaining data structures and relationships. So, the correct option is: A. Deterministic encryption
upvoted 2 times
...
[Removed]
1 year, 8 months ago
Selected Answer: A
"A" Requirements are reversible while maintaining referential integrity. Both A and D meet this requirement however D has input limitations. Therefore A is a better answer. https://cloud.google.com/dlp/docs/transformations-reference#transformation_methods
upvoted 1 times
...
danidee111
1 year, 10 months ago
This is a poor question, and not enough data is provided to determine which tokenization method should be selected. There are three methods for tokenization (also referred to as pseudonymization), see: https://cloud.google.com/dlp/docs/transformations-reference#crypto, and each method maintains referential integrity, see: https://www.youtube.com/watch?v=h0BnA7R8vg4. Thus, you'd need to know whether it needs to be reversible or format-preserving to confidently select an answer.
upvoted 3 times
...
gcpengineer
1 year, 10 months ago
Selected Answer: A
https://cloud.google.com/blog/products/identity-security/take-charge-of-your-data-how-tokenization-makes-data-usable-without-sacrificing-privacy
upvoted 1 times
...
passex
2 years, 3 months ago
"Deterministic encryption" is too wide definition, the key phrase is "Which cryptographic token format " so th answer is "Format-preserving encryption" - where Referential integrity is assured (...allows for records to maintain their relationship ....ensures that connections between values (and, with structured data, records) are preserved, even across tables)
upvoted 1 times
gcpengineer
1 year, 10 months ago
A is the ans. https://cloud.google.com/blog/products/identity-security/take-charge-of-your-data-how-tokenization-makes-data-usable-without-sacrificing-privacy
upvoted 1 times
...
...
PST21
2 years, 3 months ago
Cryptographic hashing uses strings; the question asks for tokenization, and deterministic encryption is better suited than FPE, hence A
upvoted 1 times
gcpengineer
1 year, 10 months ago
Both create tokens; FPE is more often used where you have a format like [0-9a-zA-Z]
upvoted 1 times
...
...
Littleivy
2 years, 5 months ago
Selected Answer: D
Though it's not clear from the question, to prevent data leaks it's better to have a non-reversible method, as analysts don't need re-identification
upvoted 1 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: A
A. Deterministic encryption
upvoted 1 times
...
zellck
2 years, 6 months ago
Selected Answer: A
A is the answer. https://cloud.google.com/dlp/docs/pseudonymization FPE provides fewer security guarantees compared to other deterministic encryption methods such as AES-SIV. For these reasons, Google strongly recommends using deterministic encryption with AES-SIV instead of FPE for all security sensitive use cases. Other methods like deterministic encryption using AES-SIV provide these stronger security guarantees and are recommended for tokenization use cases unless length and character set preservation are strict requirements—for example, for backward compatibility with a legacy data system.
upvoted 4 times
...
piyush_1982
2 years, 8 months ago
Selected Answer: A
This question is taken from the exact scenario described in this link https://cloud.google.com/blog/products/identity-security/take-charge-of-your-data-how-tokenization-makes-data-usable-without-sacrificing-privacy
upvoted 1 times
...
Chute5118
2 years, 8 months ago
Selected Answer: D
Both "Deterministic" and "format preserving" are key-based hashes (and reversible). It's not clear from the question, but doesn't look like we need it to be reversible. All of them maintain referential integrity https://cloud.google.com/architecture/de-identification-re-identification-pii-using-cloud-dlp#method_selection
upvoted 1 times
...
cloudprincipal
2 years, 10 months ago
Selected Answer: D
preserve referential integrity and ensure that no re-identification is possible https://cloud.google.com/dlp/docs/pseudonymization#supported-methods
upvoted 1 times
cloudprincipal
2 years, 9 months ago
forget it, it should be A.
upvoted 1 times
...
...
Taliesyn
2 years, 11 months ago
Selected Answer: D
Cryptographic hash (CryptoHashConfig) maintains referential integrity. "Deterministic encryption" is not a transformation method. https://cloud.google.com/dlp/docs/transformations-reference
upvoted 2 times
...

Question 102

Question #: 102
Topic #: 1

An office manager at your small startup company is responsible for matching payments to invoices and creating billing alerts. For compliance reasons, the office manager is only permitted to have the Identity and Access Management (IAM) permissions necessary for these tasks. Which two IAM roles should the office manager have? (Choose two.)

  • A. Organization Administrator
  • B. Project Creator
  • C. Billing Account Viewer
  • D. Billing Account Costs Manager
  • E. Billing Account User
Suggested Answer: CD 🗳️

Comments

mT3
Highly Voted 2 years, 4 months ago
Selected Answer: CD
Ans C,D. C. Billing Account Viewer :responsible for matching payments to invoices https://cloud.google.com/billing/docs/how-to/get-invoice#required-permissions Access billing documents:"Billing Account Administrator" or "Billing Account Viewer" D. Billing Account Costs Manager : creating billing alerts https://cloud.google.com/billing/docs/how-to/budgets-notification-recipients "To create or modify a budget for your Cloud Billing account, you need the Billing Account Costs Manager role or the Billing Account Administrator role on the Cloud Billing account." and "If you want the recipients of the alert emails to be able to view the budget, email recipients need permissions on the Cloud Billing account. At a minimum, ensure email recipients are added to the Billing Account Viewer role on the Cloud Billing account that owns the budget. See View a list of budgets for additional information."
upvoted 15 times
GHOST1985
1 year, 11 months ago
The link you posted talks about permissions required to ACCESS billing documents, not to link a project to a billing account; for that you would need the Billing Account User role. The correct answer is D,E.
upvoted 1 times
...
AzureDP900
1 year, 11 months ago
CD is right
upvoted 3 times
...
...
Taliesyn
Highly Voted 2 years, 5 months ago
Selected Answer: CD
Billing Account Costs Administrator to create budgets (aka alerts) Billing Account Viewer to view costs (to be able to match them to invoices)
upvoted 6 times
...
rottzy
Most Recent 1 year ago
Billing Account Costs Manager does not exist?!
upvoted 1 times
winston9
8 months ago
yes, it does: https://cloud.google.com/iam/docs/understanding-roles#billing.costsManager
upvoted 1 times
...
...
desertlotus1211
1 year, 1 month ago
Answer: CD https://cloud.google.com/billing/docs/how-to/budgets
upvoted 1 times
...
[Removed]
1 year, 2 months ago
Selected Answer: CD
C,D. BA Viewer to see spend info, and BA Costs Manager to manage costs and create budgets and alerts. BA User and BA Admin have permissions related to linking projects to billing etc., which are not needed. https://cloud.google.com/billing/docs/how-to/billing-access#ba-viewer https://cloud.google.com/billing/docs/how-to/billing-access
upvoted 2 times
...
GHOST1985
1 year, 11 months ago
Selected Answer: DE
Billing Account User: This role has very restricted permissions, so you can grant it broadly. When granted in combination with Project Creator, the two roles allow a user to create new projects linked to the billing account on which the Billing Account User role is granted. Or, when granted in combination with the Project Billing Manager role, the two roles allow a user to link and unlink projects on the billing account on which the Billing Account User role is granted. Billing Account Costs Manager: Create, edit, and delete budgets, view billing account cost information and transactions, and manage the export of billing cost data to BigQuery. Does not confer the right to export pricing data or view custom pricing in the Pricing page. Also, does not allow the linking or unlinking of projects or otherwise managing the properties of the billing account
upvoted 3 times
...
AwesomeGCP
2 years ago
Selected Answer: CD
C. Billing Account Viewer D. Billing Account Costs Manager
upvoted 2 times
...
zellck
2 years ago
Selected Answer: CD
CD is the answer. https://cloud.google.com/billing/docs/how-to/billing-access#overview-of-cloud-billing-roles-in-cloud-iam Billing Account Costs Manager (roles/billing.costsManager) - Manage budgets and view and export cost information of billing accounts (but not pricing information) Billing Account Viewer (roles/billing.viewer) - View billing account cost information and transactions.
upvoted 3 times
...
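Granting exactly these two roles can be sketched as follows (the billing account ID and user are placeholders, and `gcloud billing accounts add-iam-policy-binding` is assumed to be available in your gcloud version):

```shell
# Least privilege for the office manager: view costs/invoices...
gcloud billing accounts add-iam-policy-binding 0X0X0X-0X0X0X-0X0X0X \
  --member="user:office.manager@example.com" \
  --role="roles/billing.viewer"

# ...and manage budgets (billing alerts), nothing more.
gcloud billing accounts add-iam-policy-binding 0X0X0X-0X0X0X-0X0X0X \
  --member="user:office.manager@example.com" \
  --role="roles/billing.costsManager"
```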

Question 103

Question #: 103
Topic #: 1

You are designing a new governance model for your organization's secrets that are stored in Secret Manager. Currently, secrets for Production and Non-
Production applications are stored and accessed using service accounts. Your proposed solution must:
✑ Provide granular access to secrets
✑ Give you control over the rotation schedules for the encryption keys that wrap your secrets
✑ Maintain environment separation
✑ Provide ease of management
Which approach should you take?

  • A. 1. Use separate Google Cloud projects to store Production and Non-Production secrets. 2. Enforce access control to secrets using project-level Identity and Access Management (IAM) bindings. 3. Use customer-managed encryption keys to encrypt secrets.
  • B. 1. Use a single Google Cloud project to store both Production and Non-Production secrets. 2. Enforce access control to secrets using secret-level Identity and Access Management (IAM) bindings. 3. Use Google-managed encryption keys to encrypt secrets.
  • C. 1. Use separate Google Cloud projects to store Production and Non-Production secrets. 2. Enforce access control to secrets using secret-level Identity and Access Management (IAM) bindings. 3. Use Google-managed encryption keys to encrypt secrets.
  • D. 1. Use a single Google Cloud project to store both Production and Non-Production secrets. 2. Enforce access control to secrets using project-level Identity and Access Management (IAM) bindings. 3. Use customer-managed encryption keys to encrypt secrets.
Suggested Answer: A 🗳️

Comments

mT3
Highly Voted 2 years, 10 months ago
Selected Answer: A
Correct. Ans A. Provide granular access to secrets: 2.Enforce access control to secrets using project-level identity and Access Management (IAM) bindings. Give you control over the rotation schedules for the encryption keys that wrap your secrets: 3. Use customer-managed encryption keys to encrypt secrets. Maintain environment separation: 1. Use separate Google Cloud projects to store Production and Non-Production secrets.
upvoted 13 times
mikesp
2 years, 10 months ago
It is possible to grant an IAM binding at secret level, which is more granular than project level, but considering that it is necessary to manage the encryption keys' lifecycle, the answer is A because C does not allow that.
upvoted 4 times
AzureDP900
2 years, 5 months ago
Yes , A is right
upvoted 1 times
...
...
...
Medofree
Highly Voted 2 years, 10 months ago
None of the answers are correct, here is why : ✑ Provide granular access to secrets => 2. Enforce access control to secrets using secret-level (and not project-level) ✑ Give you control over the rotation schedules for the encryption keys that wrap your secrets => 3. Use customer-managed encryption keys to encrypt secrets. ✑ Maintain environment separation => 1. Use separate Google Cloud projects to store Production and Non-Production secrets ✑ Provide ease of management => 3. Use Google-managed encryption keys to encrypt secrets. (could be in contradiction with Give you control over the rotation schedules….) It should be an E answer : E. 1. Use separate Google Cloud projects to store Production and Non-Production secrets. 2. Enforce access control to secrets using secret-level identity and Access Management (IAM) bindings. 3. Use customer-managed encryption keys to encrypt secrets.
upvoted 5 times
desertlotus1211
1 year, 7 months ago
That's Answer A....
upvoted 1 times
...
...
nah99
Most Recent 4 months, 3 weeks ago
Selected Answer: C
It's C, right? Answer A doesn't provide granular access. C still provides control over rotation, verify for yourself: Go to GCP Console -> Secrets Manager -> Create Secret -> Select Google Managed Encryption Key -> Enable "Set rotation period" and you will see the options
upvoted 1 times
...
glb2
1 year ago
Selected Answer: C
I think C is correct: granular secret management, separate projects, and key management delegated to Google.
upvoted 1 times
...
[Removed]
1 year, 3 months ago
Selected Answer: C
For me this is answer C. It provides granular access control at the secret level. Option A provides project-level IAM bindings and not secret level. While it uses Google-managed keys (offering less control over rotation), it simplifies management and still maintains a good security posture. It maintains environment separation by using different projects for Production and Non-Production. Balances between ease of management and security, though slightly more complex due to separate projects.
upvoted 2 times
glb2
1 year ago
I think the same.
upvoted 1 times
...
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: A
A. 1. Use separate Google Cloud projects to store Production and Non-Production secrets. 2. Enforce access control to secrets using project-level identity and Access Management (IAM) bindings. 3. Use customer-managed encryption keys to encrypt secrets.
upvoted 3 times
...
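Option A in the Production project might look like this sketch (the project, key ring, secret names, and dates are hypothetical); the CMEK rotation period is what gives control over the keys that wrap the secrets:

```shell
# CMEK with a customer-controlled rotation schedule
gcloud kms keyrings create prod-kr --location=us-central1 \
  --project=prod-secrets
gcloud kms keys create secrets-key --keyring=prod-kr \
  --location=us-central1 --purpose=encryption \
  --rotation-period=30d --next-rotation-time=2025-01-01T00:00:00Z

# A secret in the dedicated Production project, wrapped by the CMEK
gcloud secrets create prod-api-key --project=prod-secrets \
  --replication-policy=user-managed --locations=us-central1 \
  --kms-key-name=projects/prod-secrets/locations/us-central1/keyRings/prod-kr/cryptoKeys/secrets-key

# Project-level IAM binding, per option A
gcloud projects add-iam-policy-binding prod-secrets \
  --member="serviceAccount:prod-app@prod-app.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"
```

A mirror-image Non-Production project keeps the environments separated while each environment stays simple to manage on its own.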

Question 104

Question #: 104
Topic #: 1

You are a security engineer at a finance company. Your organization plans to store data on Google Cloud, but your leadership team is worried about the security of their highly sensitive data. Specifically, your company is concerned about internal Google employees' ability to access your company's data on Google Cloud.
What solution should you propose?

  • A. Use customer-managed encryption keys.
  • B. Use Google's Identity and Access Management (IAM) service to manage access controls on Google Cloud.
  • C. Enable Admin activity logs to monitor access to resources.
  • D. Enable Access Transparency logs with Access Approval requests for Google employees.
Suggested Answer: D 🗳️

Comments

Sammydp202020
Highly Voted 1 year, 2 months ago
Selected Answer: D
D https://cloud.google.com/access-transparency https://cloud.google.com/cloud-provider-access-management/access-transparency/docs/overview
upvoted 5 times
...
zellck
Highly Voted 1 year, 6 months ago
Selected Answer: D
D is the answer
upvoted 5 times
...
Xoxoo
Most Recent 6 months, 3 weeks ago
Selected Answer: D
To address your organization’s concerns about the security of highly sensitive data stored on Google Cloud, you can propose the following solution: D. Enable Access Transparency logs with Access Approval requests for Google employees. This solution provides an additional layer of control and visibility over your cloud provider by enabling you to monitor and audit the actions taken by Google personnel when accessing your content. Access Transparency logs capture the actions performed by Google Cloud administrators, allowing you to maintain an audit trail and verify cloud provider access. Access Approval requests allow you to approve or dismiss requests for access by Google employees working to support your service. By combining these features, you can gain greater oversight and control over your sensitive data on Google Cloud. Please note that this is a high-level recommendation, and it is important to evaluate your specific requirements and consult the official Google Cloud documentation for detailed implementation guidance.
upvoted 3 times
...
passex
1 year, 3 months ago
Answer D. For "highly sensitive data" CMEK seems a reasonable option, but the easiest way is to use Access Transparency logs.
upvoted 1 times
...
PATILDXB
1 year, 3 months ago
B is the correct answer. IAM privileges provide fine-grained controls based on the user's function.
upvoted 1 times
...
Littleivy
1 year, 5 months ago
Selected Answer: A
Use customer-managed keys to encrypt the data yourself
upvoted 2 times
Littleivy
1 year, 4 months ago
D should be the answer on second thought
upvoted 2 times
...
...
AzureDP900
1 year, 5 months ago
D is correct
upvoted 3 times
...
jitu028
1 year, 6 months ago
Answer is D https://cloud.google.com/access-transparency Access approval Explicitly approve access to your data or configurations on Google Cloud. Access Approval requests, when combined with Access Transparency logs, can be used to audit an end-to-end chain from support ticket to access request to approval, to eventual access.
upvoted 4 times
...

Question 105

Question #: 105
Topic #: 1

You want to use the gcloud command-line tool to authenticate using a third-party single sign-on (SSO) SAML identity provider. Which options are necessary to ensure that authentication is supported by the third-party identity provider (IdP)? (Choose two.)

  • A. SSO SAML as a third-party IdP
  • B. Identity Platform
  • C. OpenID Connect
  • D. Identity-Aware Proxy
  • E. Cloud Identity
Suggested Answer: AE 🗳️

Comments

ExamQnA
Highly Voted 2 years, 10 months ago
Selected Answer: AE
Third-party identity providers If you have a third-party IdP, you can still configure SSO for third-party apps in the Cloud Identity catalog. User authentication occurs in the third-party IdP, and Cloud Identity manages the cloud apps. To use Cloud Identity for SSO, your users need Cloud Identity accounts. They sign in through your third-party IdP or using a password on their Cloud Identity accounts. https://cloud.google.com/identity/solutions/enable-sso
upvoted 23 times
AzureDP900
2 years, 5 months ago
A, E is right
upvoted 5 times
...
...
piyush_1982
Highly Voted 2 years, 8 months ago
Selected Answer: AC
I think the correct answer is A and C. The question asks what is required of the third-party IdP to authenticate gcloud commands. The gcloud command request goes to GCP; since GCP is integrated with the third-party IdP for authentication, the gcloud command needs to be authenticated with that IdP. This can be achieved if the third-party IdP supports the SAML and OIDC protocols.
upvoted 16 times
...
YourFriendlyNeighborhoodSpider
Most Recent 3 weeks, 6 days ago
Selected Answer: AE
A. SSO SAML as a third-party IdP This option confirms that you are using SSO with SAML for authentication via the third-party identity provider, which is essential for enabling SSO capabilities through gcloud. E. Cloud Identity Cloud Identity is Google Cloud's identity-as-a-service offering, which enables organizations to manage users and their access to Google Cloud resources. It supports integration with third-party SAML IdPs, allowing authentication through SSO.
upvoted 1 times
...
Mr_MIXER007
7 months, 2 weeks ago
Selected Answer: AE
upvoted 1 times
...
3d9563b
8 months, 3 weeks ago
Selected Answer: AE
SSO SAML as a third-party IdP: This option ensures that the authentication mechanism used is SAML, which is required for third-party IdP integration. Cloud Identity: This provides the underlying infrastructure to integrate and manage identities with third-party SAML IdPs, enabling SSO authentication.
upvoted 1 times
...
dija123
1 year, 1 month ago
Selected Answer: CE
C. OpenID Connect E. Cloud Identity A. SSO SAML as a third-party IdP: while it accurately describes the desired authentication, it represents the outcome we want to achieve, not the solution itself.
upvoted 2 times
oezgan
1 year ago
Gemini says: While SAML is a common protocol for SSO, it's not directly usable by gcloud for authentication. So it can't be A.
upvoted 2 times
...
...
mjcts
1 year, 2 months ago
Selected Answer: AE
OpenID is a different SSO protocol. We need SAML.
upvoted 2 times
...
Andras2k
1 year, 3 months ago
Selected Answer: AE
It specifically requires the SAML protocol. OpenID is another SSO protocol.
upvoted 2 times
...
ymkk
1 year, 7 months ago
Selected Answer: AE
Options B, C, and D are not directly related to setting up authentication using a third-party SSO SAML identity provider. Identity Platform (option B) is a service for authentication and user management, OpenID Connect (option C) is another authentication protocol, and Identity-Aware Proxy (option D) is a service for managing access to Google Cloud resources but is not specifically related to SSO SAML authentication with a third-party IdP.
upvoted 2 times
...
pfilourenco
1 year, 8 months ago
Selected Answer: AE
AE is the correct
upvoted 2 times
...
[Removed]
1 year, 8 months ago
Selected Answer: AE
"A,E" The requirement is for an SSO SAML solution with a third-party IdP. A - Correct because it specifies the right type of third-party integration. B - Not sufficient because not just any IdP will suffice; it must support SAML and SSO. C - OIDC is an option but not critical or a hard requirement; the question asks what is "necessary". D - IAP relates to authorization, not the authentication mechanism; this is not its use case. E - Needed on the receiving end in GCP to collaborate with the third-party IdP (which provides SAML SSO). https://cloud.google.com/identity/solutions/enable-sso
upvoted 2 times
...
keymson
1 year, 11 months ago
OpenID Connect has to be there, so A and C.
upvoted 1 times
testgcptestgcp
1 year, 10 months ago
Cloud Identity does not have to be there? Why?
upvoted 2 times
...
...
alleinallein
2 years ago
Selected Answer: AC
OpenID seems to be necessary
upvoted 3 times
...
bruh_1
2 years ago
A. SSO SAML as a third-party IdP: This option is necessary because it specifies that you want to use SAML-based SSO with a third-party IdP. C. OpenID Connect: This option is necessary to ensure that the third-party IdP supports OpenID Connect, which is a protocol for authentication and authorization. Therefore, the correct options are A and C.
upvoted 3 times
...
TNT87
2 years ago
Selected Answer: AC
https://cloud.google.com/certificate-authority-service/docs/tutorials/using-3pi-with-reflection#set-up-wip https://cloud.google.com/identity/solutions/enable-sso#solutions Nothing supports E to satisfy the requirements of the question.
upvoted 2 times
...
Sammydp202020
2 years, 1 month ago
Selected Answer: AE
AE https://cloud.google.com/identity/solutions/enable-sso Third-party identity providers If you have a third-party IdP, you can still configure SSO for third-party apps in the Cloud Identity catalog. User authentication occurs in the third-party IdP, and Cloud Identity manages the cloud apps. To use Cloud Identity for SSO, your users need Cloud Identity accounts. They sign in through your third-party IdP or using a password on their Cloud Identity accounts.
upvoted 2 times
...
Littleivy
2 years, 4 months ago
Selected Answer: AC
answer is A and C.
upvoted 2 times
...

Question 106


You work for a large organization where each business unit has thousands of users. You need to delegate management of access control permissions to each business unit. You have the following requirements:
✑ Each business unit manages access controls for their own projects.
✑ Each business unit manages access control permissions at scale.
✑ Business units cannot access other business units' projects.
✑ Users lose their access if they move to a different business unit or leave the company.
✑ Users and access control permissions are managed by the on-premises directory service.
What should you do? (Choose two.)

  • A. Use VPC Service Controls to create perimeters around each business unit's project.
  • B. Organize projects in folders, and assign permissions to Google groups at the folder level.
  • C. Group business units based on Organization Units (OUs) and manage permissions based on OUs
  • D. Create a project naming convention, and use Google's IAM Conditions to manage access based on the prefix of project names.
  • E. Use Google Cloud Directory Sync to synchronize users and group memberships in Cloud Identity.
Suggested Answer: BE 🗳️

Comments

TheBuckler
Highly Voted 2 years ago
I will take B & E. Makes sense for the OUs to have their own folders and respective projects under their folders. This will make each OU independent from one another in terms of environments, and will not be able to communicate with one another unless shared VPC/VPC peering is utilized. And E is fairly obvious, as they want to manage their users from on-prem directory, hence GCDS.
upvoted 5 times
...
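The B+E pattern discussed above (folders per business unit, permissions granted to GCDS-synced Google groups at the folder level) can be sketched as a simple IAM binding builder. All names here are hypothetical:

```python
# Sketch of a folder-level IAM policy binding (hypothetical names).
# Each business unit's folder grants a role to a Google group that is
# kept in sync from the on-prem directory via GCDS, so a user who
# moves or leaves drops out of the group and loses access.

def folder_binding(folder_id: str, group: str, role: str) -> dict:
    """Build a binding payload in the shape used by folder IAM
    policy updates (e.g. via gcloud resource-manager folders)."""
    return {
        "resource": f"folders/{folder_id}",
        "binding": {
            "role": role,
            "members": [f"group:{group}"],
        },
    }

binding = folder_binding("123456789", "bu-finance-devs@example.com",
                         "roles/editor")
print(binding["binding"]["members"])  # ['group:bu-finance-devs@example.com']
```

Because the binding targets a group rather than individual users, the folder policy itself never needs to change as people join or leave the business unit.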
pedrojorge
Highly Voted 1 year, 8 months ago
Selected Answer: BE
B and E
upvoted 5 times
...
tia_gll
Most Recent 6 months, 2 weeks ago
Selected Answer: BE
Ans are : B & E
upvoted 1 times
...
pradoUA
1 year ago
Selected Answer: BE
B and E are correct
upvoted 2 times
...
Rightsaidfred
1 year, 11 months ago
Agreed…B & E
upvoted 3 times
...

Question 107


Your organization recently deployed a new application on Google Kubernetes Engine. You need to deploy a solution to protect the application. The solution has the following requirements:
✑ Scans must run at least once per week
✑ Must be able to detect cross-site scripting vulnerabilities
✑ Must be able to authenticate using Google accounts
Which solution should you use?

  • A. Google Cloud Armor
  • B. Web Security Scanner
  • C. Security Health Analytics
  • D. Container Threat Detection
Suggested Answer: B 🗳️

Comments

Tabayashi
Highly Voted 2 years, 5 months ago
Answer is (B). Web Security Scanner identifies security vulnerabilities in your App Engine, Google Kubernetes Engine (GKE), and Compute Engine web applications. https://cloud.google.com/security-command-center/docs/concepts-web-security-scanner-overview
upvoted 14 times
AzureDP900
1 year, 11 months ago
Yes, B is right
upvoted 1 times
...
...
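The three requirements map directly onto a Web Security Scanner scan configuration. A minimal sketch follows; field names mirror the Security Scanner API's ScanConfig resource, but treat the exact shape (and all names/URLs) as assumptions:

```python
# Sketch of a Web Security Scanner ScanConfig covering the three
# requirements in the question. Display name, URL, and account are
# hypothetical; field names follow the Security Scanner REST API.
scan_config = {
    "displayName": "gke-app-weekly-scan",
    "startingUrls": ["https://app.example.com"],
    "schedule": {
        "intervalDurationDays": 7,        # runs at least once per week
    },
    "authentication": {
        "googleAccount": {                # authenticate with a Google account
            "username": "scan-user@example.com",
        },
    },
    # XSS detection is built into the scanner's finding types;
    # no extra configuration is needed for it.
}
assert scan_config["schedule"]["intervalDurationDays"] <= 7
```

Cloud Armor, by contrast, filters traffic at the edge; it does not crawl and scan the application on a schedule, which is why B fits the requirements.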
Alain_Barout2023
Most Recent 11 months, 1 week ago
Answer is B. Web Security Scanner identifies vulnerabilities in web applications running in App Engine, Google Kubernetes Engine (GKE), and Compute Engine. Cloud Armor is a WAF solution.
upvoted 3 times
desertlotus1211
9 months, 1 week ago
Google Cloud Armor can prevent XSS attacks. It has preconfigured rules that can mitigate XSS, broken authentication, and SQL injection. Cloud Armor also has a custom rules language that includes multiple custom operations. Could be 'A' as well...
upvoted 1 times
...
...
AwesomeGCP
2 years ago
Selected Answer: B
B. Web Security Scanner
upvoted 2 times
...
zellck
2 years ago
Selected Answer: B
B is the answer.
upvoted 4 times
...

Question 108


An organization is moving applications to Google Cloud while maintaining a few mission-critical applications on-premises. The organization must transfer the data at a bandwidth of at least 50 Gbps. What should they use to ensure secure continued connectivity between sites?

  • A. Dedicated Interconnect
  • B. Cloud Router
  • C. Cloud VPN
  • D. Partner Interconnect
Suggested Answer: A 🗳️

Comments

mouchu
Highly Voted 2 years, 11 months ago
Answer = A
upvoted 8 times
...
[Removed]
Highly Voted 1 year, 8 months ago
Selected Answer: A
"A" I think the keyword here is "at least" 50 Gbps. Partner interconnect seems to max go up to 50 Gbps but Dedicated Interconnect can guarantee that throughput https://cloud.google.com/network-connectivity/docs/interconnect/concepts/overview
upvoted 5 times
...
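The "at least 50 Gbps" reasoning in the comments above can be made concrete. The capacities below reflect the Cloud Interconnect overview as cited by the commenters (Dedicated circuits scale to multiple 10 Gbps or 100 Gbps links; Partner attachments top out at 50 Gbps); treat the exact figures as assumptions:

```python
# Rough maximum capacity per connectivity option (Gbps), per the
# Cloud Interconnect overview cited in this thread. Cloud VPN is
# approximate per-tunnel throughput.
max_gbps = {
    "Dedicated Interconnect": 200,  # e.g. up to 2 x 100 Gbps circuits
    "Partner Interconnect": 50,     # largest attachment size; not all
                                    # partners offer it in all locations
    "Cloud VPN": 3,                 # roughly, per tunnel
}

required = 50
meets = [name for name, cap in max_gbps.items() if cap >= required]
print(meets)
```

Partner Interconnect sits exactly at the 50 Gbps ceiling and depends on partner availability, so only Dedicated Interconnect guarantees headroom above the requirement.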
Zek
Most Recent 4 months, 1 week ago
Selected Answer: A
https://cloud.google.com/network-connectivity/docs/interconnect/concepts/overview — For Partner Interconnect, the maximum supported attachment size is 50 Gbps, but not all sizes might be available, depending on what's offered by your chosen partner in the selected location. And the question says at least 50 Gbps (50 or more), which seems to be obtainable only with Dedicated Interconnect.
upvoted 1 times
...
AzureDP900
2 years, 5 months ago
A is right
upvoted 1 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: A
A. Dedicated Interconnect
upvoted 1 times
...
zellck
2 years, 6 months ago
Selected Answer: A
A is the answer.
upvoted 1 times
...
Arturo_Cloud
2 years, 7 months ago
I understand that not all Partner Interconnect connections support 50 Gbps, so I'm going with A) for guaranteed connectivity. https://cloud.google.com/network-connectivity/docs/interconnect/concepts/overview
upvoted 3 times
...

Question 109


Your organization has had a few recent DDoS attacks. You need to authenticate responses to domain name lookups. Which Google Cloud service should you use?

  • A. Cloud DNS with DNSSEC
  • B. Cloud NAT
  • C. HTTP(S) Load Balancing
  • D. Google Cloud Armor
Suggested Answer: A 🗳️

Comments

Tabayashi
Highly Voted 1 year, 11 months ago
Answer is (A). The Domain Name System Security Extensions (DNSSEC) is a feature of the Domain Name System (DNS) that authenticates responses to domain name lookups. It does not provide privacy protections for those lookups, but prevents attackers from manipulating or poisoning the responses to DNS requests. https://cloud.google.com/dns/docs/dnssec
upvoted 19 times
AzureDP900
1 year, 5 months ago
Agreed, A is right
upvoted 2 times
...
...
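Enabling DNSSEC on a Cloud DNS zone, as the answer above describes, is a one-field change on the managed zone. A minimal sketch, with hypothetical zone and domain names and the dnssecConfig shape following the Cloud DNS API:

```python
# Sketch of a Cloud DNS managed-zone update that turns on DNSSEC
# (equivalent in spirit to:
#   gcloud dns managed-zones update prod-zone --dnssec-state on).
# Zone name and domain are hypothetical.
zone_update = {
    "name": "prod-zone",
    "dnsName": "example.com.",
    "dnssecConfig": {
        # Once "on", Cloud DNS signs the zone and manages DNSKEY and
        # RRSIG record creation and rotation automatically.
        "state": "on",
    },
}
assert zone_update["dnssecConfig"]["state"] == "on"
```

After enabling, the zone's DS record still has to be published at the registrar for resolvers to validate the chain of trust.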
Xoxoo
Most Recent 6 months, 3 weeks ago
Selected Answer: A
To authenticate responses to domain name lookups and protect your organization from DDoS attacks, you can use Cloud DNS with DNSSEC. DNS Security Extensions (DNSSEC) is a feature of the Domain Name System (DNS) that authenticates responses to domain name lookups and prevents attackers from manipulating or poisoning the responses to DNS requests. Cloud DNS supports DNSSEC and automatically manages the creation and rotation of DNSSEC keys (DNSKEY records) and the signing of zone data with resource record digital signature (RRSIG) records. By enabling DNSSEC in Cloud DNS, you can protect your domains from spoofing and poisoning attacks. Keyword here is domain name lookup so it must be A.
upvoted 4 times
...
risc
1 year, 5 months ago
Selected Answer: A
A, as explained by Tabayashi
upvoted 2 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: A
A. Cloud DNS with DNSSEC
upvoted 2 times
...
zellck
1 year, 6 months ago
Selected Answer: A
A is the answer.
upvoted 2 times
...

Question 110


Your Security team believes that a former employee of your company gained unauthorized access to Google Cloud resources some time in the past 2 months by using a service account key. You need to confirm the unauthorized access and determine the user activity. What should you do?

  • A. Use Security Health Analytics to determine user activity.
  • B. Use the Cloud Monitoring console to filter audit logs by user.
  • C. Use the Cloud Data Loss Prevention API to query logs in Cloud Storage.
  • D. Use the Logs Explorer to search for user activity.
Suggested Answer: D 🗳️

Comments

Medofree
Highly Voted 1 year, 10 months ago
Selected Answer: D
D. We use audit logs by searching the Service Account and checking activities in the past 2 months. (the user identity will not be seen since he used the SA identity but we can make correlations based on ip address, working hour, etc. )
upvoted 14 times
AzureDP900
1 year, 5 months ago
D is right, I agree
upvoted 3 times
...
...
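The investigation described above (search audit logs for the service account's activity over the past two months) boils down to a Logs Explorer filter. A sketch, with a hypothetical service account email:

```python
from datetime import datetime, timedelta, timezone

# Sketch of a Cloud Logging / Logs Explorer filter for auditing what
# a suspect service account did over the past ~2 months. The service
# account email is hypothetical.
sa = "suspect-sa@my-project.iam.gserviceaccount.com"
since = (datetime.now(timezone.utc) - timedelta(days=60)).strftime(
    "%Y-%m-%dT%H:%M:%SZ")

log_filter = " AND ".join([
    'logName:"cloudaudit.googleapis.com"',             # audit logs only
    f'protoPayload.authenticationInfo.principalEmail="{sa}"',
    f'timestamp>="{since}"',
])
print(log_filter)
```

As Medofree notes, the entries will carry the service account identity rather than the former employee's, so correlation (caller IP, time of day) is needed to attribute the activity.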
[Removed]
Highly Voted 8 months, 2 weeks ago
Selected Answer: D
"D" A - Security Health Analytics: managed vulnerability assessment. Not related. B - Cloud Monitoring: performance metrics (e.g. availability). Not related. C - Cloud DLP: filtering/masking sensitive data. Not related. D - Logs Explorer: log analysis. Related; great for investigations. References: https://cloud.google.com/monitoring https://cloud.google.com/docs/security/compromised-credentials#look_for_unauthorized_access_and_resources
upvoted 8 times
...
chickenstealers
Most Recent 1 year, 3 months ago
B is the correct answer. https://cloud.google.com/docs/security/compromised-credentials — "Monitor for anomalies in service account key usage using Cloud Monitoring."
upvoted 2 times
Sammydp202020
1 year, 2 months ago
Cloud Monitoring/Logging is a service enabler that captures the logs. The question asks how one checks for user activity, so the response warranted is D (Logs Explorer). https://cloud.google.com/docs/security/compromised-credentials#look_for_unauthorized_access_and_resources
upvoted 1 times
gcpengineer
10 months, 4 weeks ago
2 months is a long time to check data access logs
upvoted 1 times
...
...
...
zellck
1 year, 6 months ago
Selected Answer: D
D is the answer.
upvoted 1 times
...
mikesp
1 year, 10 months ago
Selected Answer: D
B is intended to mislead. Cloud Monitoring provides only metrics. To check user activity, it is necessary to go to Cloud Logging and search the audit logs.
upvoted 8 times
...
mT3
1 year, 10 months ago
Selected Answer: B
Correct answer is (B). Investigate the potentially unauthorized activity and restore the account. Ref: https://support.google.com/a/answer/2984349
upvoted 3 times
...

Question 111


Your company requires the security and network engineering teams to identify all network anomalies within and across VPCs, internal traffic from VMs to VMs, traffic between end locations on the internet and VMs, and traffic between VMs to Google Cloud services in production. Which method should you use?

  • A. Define an organization policy constraint.
  • B. Configure packet mirroring policies.
  • C. Enable VPC Flow Logs on the subnet.
  • D. Monitor and analyze Cloud Audit Logs.
Suggested Answer: B 🗳️

Comments

Tabayashi
Highly Voted 2 years, 5 months ago
I think the answer is (C). VPC Flow Logs samples each VM's TCP, UDP, ICMP, ESP, and GRE flows. Both inbound and outbound flows are sampled. These flows can be between the VM and another VM, a host in your on-premises data center, a Google service, or a host on the internet. https://cloud.google.com/vpc/docs/flow-logs
upvoted 13 times
...
hybridpro
Highly Voted 2 years, 3 months ago
B should be the answer. For detecting network anomalies you need payload and header data as well to be effective. Besides, C says to enable VPC Flow Logs on a subnet, which won't serve our purpose either.
upvoted 8 times
...
dija123
Most Recent 7 months, 1 week ago
Selected Answer: B
Packet mirroring policies allow you to mirror all traffic passing through a specific network interface or VPC route to a designated destination (e.g., collector VMs behind an internal load balancer). This captured traffic can then be analyzed by security and network engineers using tools like Suricata or Security Command Center for advanced anomaly detection. This approach provides the necessary level of detail and flexibility for identifying anomalies across all the mentioned traffic types.
upvoted 1 times
...
b6f53d8
9 months, 1 week ago
C works only at the subnet level, and we need coverage across many VPCs, so I prefer B
upvoted 1 times
...
[Removed]
9 months, 3 weeks ago
Selected Answer: C
C - we need more than just the VMs here.
upvoted 1 times
...
sebG35
10 months, 1 week ago
The answer is C. The need is to identify all network anomalies within and across VPCs, including internal traffic from VMs to VMs. B does not meet all needs: it is limited to the VM and doesn't cover traffic across VPCs. https://cloud.google.com/vpc/docs/packet-mirroring?hl=en C covers all needs: https://cloud.google.com/vpc/docs/flow-logs?hl=en
upvoted 1 times
...
[Removed]
1 year, 2 months ago
Selected Answer: B
"B" When there's a need for broad and deep network analysis, only packet mirroring can achieve this. Here's the specific use case that matches the quest. https://cloud.google.com/vpc/docs/packet-mirroring#enterprise_security
upvoted 3 times
...
tifo16
1 year, 10 months ago
https://cloud.google.com/vpc/docs/packet-mirroring#enterprise_security Security and network engineering teams must ensure that they are catching all anomalies and threats that might indicate security breaches and intrusions. They mirror all traffic so that they can complete a comprehensive inspection of suspicious flows. Because attacks can span multiple packets, security teams must be able to get all packets for each flow.
upvoted 3 times
tifo16
1 year, 10 months ago
Should be B
upvoted 2 times
...
...
Rightsaidfred
1 year, 10 months ago
As it is a close tie with ambiguity between B and C, I would say it is C (VPC Flow Logs) in this instance, as Question 121 focuses more on Packet Mirroring with the IDS use case.
upvoted 2 times
[Removed]
1 year, 2 months ago
C is limited to the subnet level, which is not enough to address all the needs in the question.
upvoted 1 times
...
...
marmar11111
1 year, 11 months ago
Selected Answer: B
Should be B
upvoted 3 times
...
hcnh
1 year, 11 months ago
Selected Answer: C
C is the answer, as B has a limitation relative to the question: the mirroring happens on the virtual machine (VM) instances, not on the network. Consequently, Packet Mirroring consumes additional bandwidth on the VMs.
upvoted 3 times
...
AwesomeGCP
2 years ago
Selected Answer: B
B. Configure packet mirroring policies.
upvoted 5 times
...
zellck
2 years ago
Selected Answer: B
B is the answer. https://cloud.google.com/vpc/docs/packet-mirroring#enterprise_security Security and network engineering teams must ensure that they are catching all anomalies and threats that might indicate security breaches and intrusions. They mirror all traffic so that they can complete a comprehensive inspection of suspicious flows.
upvoted 3 times
AzureDP900
1 year, 11 months ago
Agree with B
upvoted 2 times
...
...
GHOST1985
2 years ago
Selected Answer: B
100% Answer B: anomalies mean packet mirroring. https://cloud.google.com/vpc/docs/packet-mirroring#enterprise_security "Packet Mirroring is useful when you need to monitor and analyze your security status. It exports all traffic, not only the traffic between sampling periods. For example, you can use security software that analyzes mirrored traffic to detect all threats or anomalies. Additionally, you can inspect the full traffic flow to detect application performance issues. For more information, see the example use cases." https://cloud.google.com/vpc/docs/packet-mirroring
upvoted 2 times
...
tangac
2 years, 1 month ago
Selected Answer: C
First, you can use VPC Flow Logs at the subnet level: https://cloud.google.com/vpc/docs/using-flow-logs Then, VPC Flow Logs' main feature is to collect logs that can be used for network monitoring, forensics, real-time security analysis, and expense optimization.
upvoted 1 times
...
jvkubjg
2 years, 1 month ago
Selected Answer: B
Anomalies -> Packet Mirroring
upvoted 1 times
...
mikesp
2 years, 4 months ago
Selected Answer: C
VPC Flow Logs also helps you perform network forensics when investigating suspicious behavior, such as access from abnormal sources or unexpected volumes of data migration.
upvoted 3 times
...

Question 112


Your company has been creating users manually in Cloud Identity to provide access to Google Cloud resources. Due to continued growth of the environment, you want to authorize the Google Cloud Directory Sync (GCDS) instance and integrate it with your on-premises LDAP server to onboard hundreds of users. You are required to:
✑ Replicate user and group lifecycle changes from the on-premises LDAP server in Cloud Identity.
✑ Disable any manually created users in Cloud Identity.
You have already configured the LDAP search attributes to include the users and security groups in scope for Google Cloud. What should you do next to complete this solution?

  • A. 1. Configure the option to suspend domain users not found in LDAP. 2. Set up a recurring GCDS task.
  • B. 1. Configure the option to delete domain users not found in LDAP. 2. Run GCDS after user and group lifecycle changes.
  • C. 1. Configure the LDAP search attributes to exclude manually created Cloud Identity users not found in LDAP. 2. Set up a recurring GCDS task.
  • D. 1. Configure the LDAP search attributes to exclude manually created Cloud Identity users not found in LDAP. 2. Run GCDS after user and group lifecycle changes.
Suggested Answer: A 🗳️

Comments

mT3
Highly Voted 1 year, 10 months ago
Selected Answer: A
Answer is (A). To achieve the requirement "Disable any manually created users in Cloud Identity", configure GCDS to suspend rather than delete accounts if user accounts are not found in the LDAP directory in GCDS. Ref: https://support.google.com/a/answer/7177267
upvoted 15 times
AzureDP900
1 year, 5 months ago
A is right
upvoted 1 times
alleinallein
1 year ago
Why not C?
upvoted 1 times
...
...
...
GCBC
Most Recent 7 months, 2 weeks ago
Selected Answer: A
Ref: https://support.google.com/a/answer/7177267
upvoted 1 times
...
[Removed]
8 months, 2 weeks ago
Selected Answer: A
"A" https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-synchronizing-user-accounts#deletion_policy
upvoted 2 times
...
AwesomeGCP
1 year, 5 months ago
Selected Answer: A
A. 1. Configure the option to suspend domain users not found in LDAP. 2. Set up a recurring GCDS task.
upvoted 2 times
...
tangac
1 year, 7 months ago
Selected Answer: A
clearly A
upvoted 2 times
...
KillerGoogle
1 year, 11 months ago
C. 1. Configure the LDAP search attributes to exclude manually created Cloud Identity users not found in LDAP. 2. Set up a recurring GCDS task.
upvoted 3 times
...
Tabayashi
1 year, 11 months ago
I think the answer is (A). When using Shared VPC, a service perimeter that includes projects that belong to a Shared VPC network must also include the project that hosts the network. When projects that belong to a Shared VPC network are not in the same perimeter as the host project, services might not work as expected or might be blocked entirely. Ensure that the Shared VPC network host is in the same service perimeter as the projects connected to the network. https://cloud.google.com/vpc-service-controls/docs/troubleshooting#shared_vpc
upvoted 3 times
Tabayashi
1 year, 11 months ago
Sorry, this answer is question 113.
upvoted 2 times
...
...

Question 113


You are troubleshooting access denied errors between Compute Engine instances connected to a Shared VPC and BigQuery datasets. The datasets reside in a project protected by a VPC Service Controls perimeter. What should you do?

  • A. Add the host project containing the Shared VPC to the service perimeter.
  • B. Add the service project where the Compute Engine instances reside to the service perimeter.
  • C. Create a service perimeter between the service project where the Compute Engine instances reside and the host project that contains the Shared VPC.
  • D. Create a perimeter bridge between the service project where the Compute Engine instances reside and the perimeter that contains the protected BigQuery datasets.
Suggested Answer: A 🗳️

Comments

risc
Highly Voted 2 years, 5 months ago
Selected Answer: A
(A) For VMs inside shared VPC, the host project needs to be added to the perimeter as well. I had real-life experience with this. However, this creates new security issues as all other VMs in other projects which are attached to shared subnets in the same host project then are also able to access the perimeter. Google recommends setting up Private Service Connect Endpoints to achieve subnet segregation for VPC-SC usage with Host projects.
upvoted 13 times
...
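The fix described above (the Shared VPC host project must sit inside the same perimeter as the protected project) can be sketched as a service perimeter resource. Field names loosely follow the Access Context Manager servicePerimeters resource; the policy and project numbers are hypothetical:

```python
# Sketch of a VPC Service Controls perimeter that includes BOTH the
# project holding the BigQuery datasets and the Shared VPC host
# project, per the troubleshooting guidance cited in this thread.
# Policy ID and project numbers are hypothetical; perimeters
# reference projects by project number.
perimeter = {
    "name": "accessPolicies/123456/servicePerimeters/prod_bq",
    "status": {
        "resources": [
            "projects/111111111111",  # project with the BQ datasets
            "projects/222222222222",  # Shared VPC host project (the fix)
        ],
        "restrictedServices": ["bigquery.googleapis.com"],
    },
}
assert "projects/222222222222" in perimeter["status"]["resources"]
```

As risc cautions, once the host project is inside the perimeter, every VM attached to that host project's shared subnets gains the same perimeter access, so segmentation may need to be revisited.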
BPzen
Most Recent 4 months, 1 week ago
Selected Answer: D
Why D (create a perimeter bridge) is correct: Problem analysis: the BigQuery datasets reside within a service perimeter; the Compute Engine instances are in a service project connected to a Shared VPC, outside the BigQuery perimeter. Access is being denied because the instances are not within the same service perimeter as the datasets. Solution: a perimeter bridge allows resources in the service project (where the Compute Engine instances reside) to securely communicate with resources in the service perimeter (where the BigQuery datasets reside), complying with VPC Service Controls while allowing the required access.
upvoted 2 times
...
SQLbox
6 months, 4 weeks ago
VPC Service Controls are designed to protect Google Cloud resources (such as BigQuery) from unauthorized access by restricting access to those resources based on service perimeters. • In this scenario, the Compute Engine instances are trying to access BigQuery datasets, which are within a VPC Service Controls perimeter. • Compute Engine instances are in a service project, and to allow them to access resources (BigQuery) within the service perimeter, that service project must be added to the service perimeter.
upvoted 1 times
...
winston9
1 year, 1 month ago
Selected Answer: A
It's A check this: https://cloud.google.com/compute/docs/instances/protecting-resources-vpc-service-controls#shared-vpc-with-vpc-service-controls
upvoted 1 times
...
b6f53d8
1 year, 2 months ago
Why not D? In my opinion, we need both A and B to resolve the issue, so why not D?
upvoted 1 times
...
desertlotus1211
1 year, 7 months ago
Answer A: Select the projects that you want to secure within the perimeter. Click Projects. In the Add Projects window, select the projects you want to add. If you are using Shared VPC, make sure to add the host project and service projects. https://cloud.google.com/run/docs/securing/using-vpc-service-controls
upvoted 1 times
...
bruh_1
2 years ago
B. Add the service project where the Compute Engine instances reside to the service perimeter. Explanation: The VPC Service Controls perimeter restricts data access to a set of resources within a VPC network. To allow Compute Engine instances in the service project to access BigQuery datasets in the protected project, the service project needs to be added to the service perimeter.
upvoted 3 times
gcpengineer
1 year, 10 months ago
but the instance will communicate via the host project from the shared subnet
upvoted 2 times
...
...
Ric350
2 years ago
It's A and here's why. The question establishes there's already a VPC Service Controls perimeter and a Shared VPC. Since the dataset resides in a project protected by a VPC SC perimeter, you wouldn't create a NEW service perimeter. Further, since we know per the question there's a SHARED VPC established & you're TROUBLESHOOTING, per the doc below, it makes sense that they're both not in the same VPC SC perimeter and why access is failing. https://cloud.google.com/vpc-service-controls/docs/troubleshooting#shared_vpc The question isn't clear where the Compute Engine instance or dataset live in respect to the VPC SC perimeter. But it's clear they are both NOT in the same VPC SC perimeter, and the question states the BQ dataset is already protected. So B, C and D are wrong, and only A ensures BOTH are in the same VPC SC perimeter regardless of which ones live in the host or service project.
upvoted 2 times
...
Littleivy
2 years, 5 months ago
Selected Answer: A
As the scenario is for troubleshooting, I'll choose A as the answer, since it's more likely people would forget to include the host project in the service perimeter
upvoted 2 times
...
AzureDP900
2 years, 5 months ago
A. Add the host project containing the Shared VPC to the service perimeter. Looks good to me based on requirements
upvoted 2 times
...
soltium
2 years, 6 months ago
Selected Answer: B
Weird question; you need both A and B. I'll choose B.
upvoted 3 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: A
A. Add the host project containing the Shared VPC to the service perimeter.
upvoted 1 times
...
zellck
2 years, 6 months ago
Selected Answer: A
A is the answer. https://cloud.google.com/vpc-service-controls/docs/service-perimeters#secure-google-managed-resources If you're using Shared VPC, you must include the host project in a service perimeter along with any projects that belong to the Shared VPC.
upvoted 3 times
...
GHOST1985
2 years, 6 months ago
Selected Answer: A
"If you're using Shared VPC, you must include the host project in a service perimeter along with any projects that belong to the Shared VPC" => https://cloud.google.com/vpc-service-controls/docs/service-perimeters
upvoted 1 times
...
Chute5118
2 years, 8 months ago
Selected Answer: B
"If you're using Shared VPC, you must include the host project in a service perimeter along with any projects that belong to the Shared VPC." https://cloud.google.com/vpc-service-controls/docs/service-perimeters B
upvoted 2 times
GHOST1985
2 years, 6 months ago
I think you mean Answer A :)
upvoted 1 times
...
...
Aiffone
2 years, 9 months ago
I think the answer should be C (a combination of A and B)
upvoted 1 times
...
mikesp
2 years, 10 months ago
Selected Answer: B
Change my answer.
upvoted 2 times
...

Question 114

Exam Professional Cloud Security Engineer topic 1 question 114 discussion

You recently joined the networking team supporting your company's Google Cloud implementation. You are tasked with familiarizing yourself with the firewall rules configuration and providing recommendations based on your networking and Google Cloud experience. What product should you recommend to detect firewall rules that are overlapped by attributes from other firewall rules with higher or equal priority?

  • A. Security Command Center
  • B. Firewall Rules Logging
  • C. VPC Flow Logs
  • D. Firewall Insights
Suggested Answer: D 🗳️

Comments

ExamQnA
Highly Voted 1 year, 10 months ago
Selected Answer: D
Firewall Insights analyzes your firewall rules to detect firewall rules that are shadowed by other rules. A shadowed rule is a firewall rule that has all of its relevant attributes, such as its IP address and port ranges, overlapped by attributes from one or more rules with higher or equal priority, called shadowing rules. https://cloud.google.com/network-intelligence-center/docs/firewall-insights/concepts/overview
upvoted 6 times
...
zellck
Highly Voted 1 year, 6 months ago
Selected Answer: D
D is the answer. https://cloud.google.com/network-intelligence-center/docs/firewall-insights/concepts/overview#shadowed-firewall-rules Firewall Insights analyzes your firewall rules to detect firewall rules that are shadowed by other rules. A shadowed rule is a firewall rule that has all of its relevant attributes, such as its IP address and port ranges, overlapped by attributes from one or more rules with higher or equal priority, called shadowing rules.
upvoted 6 times
AzureDP900
1 year, 5 months ago
Agreed
upvoted 1 times
...
...
Xoxoo
Most Recent 6 months, 3 weeks ago
Selected Answer: D
To detect firewall rules that are overlapped by attributes from other firewall rules with higher or equal priority, you can use Firewall Insights. Firewall Insights is a feature of Google Cloud that provides visibility to firewall rule usage metrics and automatic analysis on firewall rule misconfigurations. It allows you to improve your security posture by detecting overly permissive firewall rules, unused firewall rules, and overlapping firewall rules. With Firewall Insights, you can automatically detect rules that can’t be reached during firewall rule evaluation due to overlapping rules with higher priorities. You can also detect unnecessary allow rules, open ports, and IP ranges and remove them to tighten the security boundary.
upvoted 3 times
...
GCBC
7 months, 2 weeks ago
definitely D - https://cloud.google.com/network-intelligence-center/docs/firewall-insights/concepts/overview
upvoted 2 times
...
AzureDP900
1 year, 5 months ago
D. Firewall Insights
upvoted 2 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: D
D. Firewall Insights
upvoted 2 times
...
mikesp
1 year, 10 months ago
Selected Answer: D
Answer = D.
upvoted 1 times
...
mouchu
1 year, 10 months ago
Answer = D Firewall Insights analyzes your firewall rules to detect firewall rules that are shadowed by other rules.
upvoted 2 times
...
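For reference, the shadowed-rule findings the comments describe can be listed through the Recommender surface of gcloud. A sketch only, assuming the Firewall Insights API is enabled; PROJECT_ID is a placeholder.

```shell
# List firewall insights (including shadowed-rule findings) for a project.
# PROJECT_ID is a placeholder; requires the Firewall Insights API to be enabled.
gcloud recommender insights list \
    --project=PROJECT_ID \
    --location=global \
    --insight-type=google.compute.firewall.Insight
```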

Question 115

Exam Professional Cloud Security Engineer topic 1 question 115 discussion

The security operations team needs access to the security-related logs for all projects in their organization. They have the following requirements:
✑ Follow the least privilege model by having only view access to logs.
✑ Have access to Admin Activity logs.
✑ Have access to Data Access logs.
✑ Have access to Access Transparency logs.
Which Identity and Access Management (IAM) role should the security operations team be granted?

  • A. roles/logging.privateLogViewer
  • B. roles/logging.admin
  • C. roles/viewer
  • D. roles/logging.viewer
Suggested Answer: A 🗳️

Comments

mouchu
Highly Voted 2 years, 4 months ago
Answer = A roles/logging.privateLogViewer (Private Logs Viewer) includes all the permissions contained by roles/logging.viewer, plus the ability to read Data Access audit logs in the _Default bucket.
upvoted 18 times
mT3
2 years, 4 months ago
Ref: https://cloud.google.com/logging/docs/access-control
upvoted 5 times
...
...
Littleivy
Highly Voted 1 year, 11 months ago
Selected Answer: A
You need roles/logging.privateLogViewer to view data access log and Access Transparency logs https://cloud.google.com/cloud-provider-access-management/access-transparency/docs/reading-logs#viewing-logs https://developers.google.com/cloud-search/docs/guides/audit-logging-manual#audit_log_permissions
upvoted 5 times
...
KLei
Most Recent 3 months, 2 weeks ago
Selected Answer: A
For access to all logs in the _Required bucket, and access to the _Default view on the _Default bucket, grant the Logs Viewer (roles/logging.viewer) role. For access to all logs in the _Required and _Default buckets, including data access logs, grant the Private Logs Viewer (roles/logging.privateLogViewer) role.
upvoted 2 times
...
[Removed]
9 months, 3 weeks ago
Selected Answer: A
A. since we need the data access logs on top of the others, only private log viewer provides this access/
upvoted 3 times
...
ale183
1 year ago
Answer= A To view all logs in the _Required bucket, and to view logs in the _Default view on the _Default bucket, you must have the Logs Viewer (roles/logging.viewer) role. To view all logs in the _Required and _Default buckets, including data access logs, you must have the Private Logs Viewer (roles/logging.privateLogViewer) role.
upvoted 2 times
...
blacortik
1 year, 1 month ago
Selected Answer: D
D. roles/logging.viewer The security operations team should be granted the roles/logging.viewer IAM role. This role provides the necessary permissions to view logs within the organization's projects, and it aligns with the least privilege principle as it grants only view access to logs.
upvoted 2 times
...
gcpengineer
1 year, 4 months ago
Selected Answer: A
A is the ans
upvoted 1 times
...
bruh_1
1 year, 6 months ago
D is the answer: The security operations team needs to have access to specific logs across all projects in their organization while following the least privilege model. The appropriate IAM role to grant them would be roles/logging.viewer. This role provides read-only access to all logs in the project, including Admin Activity logs, Data Access logs, and Access Transparency logs. It does not provide access to any other resources in the project, such as compute instances or storage buckets. This ensures that the security operations team can only view the logs and cannot make any modifications to the resources.
upvoted 1 times
...
AzureDP900
1 year, 11 months ago
A is the answer.
upvoted 1 times
...
AwesomeGCP
2 years ago
Selected Answer: A
A. roles/logging.privateLogViewer
upvoted 1 times
...
zellck
2 years ago
A is the answer. https://cloud.google.com/logging/docs/access-control#considerations roles/logging.privateLogViewer (Private Logs Viewer) includes all the permissions contained by roles/logging.viewer, plus the ability to read Data Access audit logs in the _Default bucket.
upvoted 2 times
...
cloudprincipal
2 years, 4 months ago
Selected Answer: A
roles/logging.privateLogViewer (Private Logs Viewer) includes all the permissions contained by roles/logging.viewer, plus the ability to read Data Access audit logs in the _Default bucket. https://cloud.google.com/logging/docs/access-control
upvoted 3 times
...
Nicky1402
2 years, 5 months ago
I think the correct answer is A. logging.admin is too broad a permission. We need to give "only view access to logs". And we need to: ✑ Have access to Admin Activity logs. ✑ Have access to Data Access logs. ✑ Have access to Access Transparency logs. Only the roles/logging.privateLogViewer role has all these permissions. Private Logs Viewer (roles/logging.privateLogViewer) Provides permissions of the Logs Viewer role and in addition, provides read-only access to log entries in private logs. Lowest-level resources where you can grant this role: Project After you've configured Access Transparency for your Google Cloud organization, you can set controls for who can access the Access Transparency logs by assigning a user or group the Private Logs Viewer role. Links for reference: https://cloud.google.com/logging/docs/access-control https://cloud.google.com/cloud-provider-access-management/access-transparency/docs/reading-logs?hl=en
upvoted 4 times
...
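The grant implied by answer A can be sketched as a single organization-level binding, which gives the team view-only access (including Data Access and Access Transparency logs) across all projects. The organization ID and group address below are placeholders.

```shell
# Grant Private Logs Viewer at the organization level.
# Organization ID and group address are hypothetical placeholders.
gcloud organizations add-iam-policy-binding 123456789012 \
    --member="group:secops-team@example.com" \
    --role="roles/logging.privateLogViewer"
```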

Question 116

Exam Professional Cloud Security Engineer topic 1 question 116 discussion

You are exporting application logs to Cloud Storage. You encounter an error message that the log sinks don't support uniform bucket-level access policies. How should you resolve this error?

  • A. Change the access control model for the bucket
  • B. Update your sink with the correct bucket destination.
  • C. Add the roles/logging.logWriter Identity and Access Management (IAM) role to the bucket for the log sink identity.
  • D. Add the roles/logging.bucketWriter Identity and Access Management (IAM) role to the bucket for the log sink identity.
Suggested Answer: A 🗳️

Comments

mikesp
Highly Voted 1 year, 10 months ago
Selected Answer: A
https://cloud.google.com/logging/docs/export/troubleshoot Unable to grant correct permissions to the destination: Even if the sink was successfully created with the correct service account permissions, this error message displays if the access control model for the Cloud Storage bucket was set to uniform access when the bucket was created. For existing Cloud Storage buckets, you can change the access control model for the first 90 days after bucket creation by using the Permissions tab. For new buckets, select the Fine-grained access control model during bucket creation. For details, see Creating Cloud Storage buckets.
upvoted 11 times
...
ArizonaClassics
Highly Voted 6 months ago
Uniform Bucket-Level Access (UBLA) is a feature in Google Cloud Storage that allows you to use Identity and Access Management (IAM) to manage access to a bucket's content. When it is enabled, Access Control Lists (ACLs) cannot be used. If you're encountering an error message indicating that the log sinks don't support uniform bucket-level access policies, it's possible that your bucket is using UBLA and the logging mechanism doesn't support it. A. Change the access control model for the bucket appears to be the most relevant choice to address the error related to UBLA support. By reverting from UBLA to the fine-grained access control model, you might resolve the issue if the log sinks indeed do not support UBLA. Always validate changes and ensure they comply with your organization's security policies.
upvoted 5 times
...
Xoxoo
Most Recent 6 months, 3 weeks ago
Selected Answer: A
To resolve the error message that the log sinks don’t support uniform bucket-level access policies when exporting application logs to Cloud Storage, you should change the access control model for the bucket. This will allow you to enable uniform bucket-level access, which is required for log sinks to function properly. By changing the access control model for the bucket, you can ensure that the necessary permissions are granted and that the log sinks can support uniform bucket-level access policies.
upvoted 3 times
...
AzureDP900
1 year, 5 months ago
A is right
upvoted 1 times
...
zellck
1 year, 6 months ago
Selected Answer: A
A is the answer. https://cloud.google.com/logging/docs/export/troubleshoot#errors_exporting_to_cloud_storage - Unable to grant correct permissions to the destination: Even if the sink was successfully created with the correct service account permissions, this error message displays if the access control model for the Cloud Storage bucket was set to uniform access when the bucket was created.
upvoted 4 times
...
mT3
1 year, 10 months ago
Selected Answer: A
Answer is (A). If the sink reports that uniform bucket-level access policies are not supported, the bucket currently has the Uniform access control model enabled; the log sink requires the Fine-grained (ACL-based) model. Therefore, change the access control model for the bucket. Ref: https://cloud.google.com/storage/docs/access-control
upvoted 3 times
...
Taliesyn
1 year, 11 months ago
Selected Answer: A
A: can't export logs to a bucket with uniform bucket-level access (B sounds halfway decent as well, but you'd still need another bucket without uniform bucket-level access, so it's incomplete)
upvoted 1 times
...
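Checking and changing the bucket's access control model can be sketched with gsutil; the bucket name is a placeholder. Note that Cloud Storage only allows disabling uniform bucket-level access within 90 days of it being enabled.

```shell
# Inspect the bucket's current access control model (placeholder bucket name).
gsutil uniformbucketlevelaccess get gs://my-log-sink-bucket
# Switch back to fine-grained (ACL-based) access so the log sink can write.
gsutil uniformbucketlevelaccess set off gs://my-log-sink-bucket
```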

Question 117

Exam Professional Cloud Security Engineer topic 1 question 117 discussion

You plan to deploy your cloud infrastructure using a CI/CD cluster hosted on Compute Engine. You want to minimize the risk of its credentials being stolen by a third party. What should you do?

  • A. Create a dedicated Cloud Identity user account for the cluster. Use a strong self-hosted vault solution to store the user's temporary credentials.
  • B. Create a dedicated Cloud Identity user account for the cluster. Enable the constraints/iam.disableServiceAccountCreation organization policy at the project level.
  • C. Create a custom service account for the cluster. Enable the constraints/iam.disableServiceAccountKeyCreation organization policy at the project level
  • D. Create a custom service account for the cluster. Enable the constraints/iam.allowServiceAccountCredentialLifetimeExtension organization policy at the project level.
Suggested Answer: C 🗳️

Comments

ExamQnA
Highly Voted 2 years, 10 months ago
Selected Answer: C
Disable service account key creation You can use the iam.disableServiceAccountKeyCreation boolean constraint to disable the creation of new external service account keys. This allows you to control the use of unmanaged long-term credentials for service accounts. When this constraint is set, user-managed credentials cannot be created for service accounts in projects affected by the constraint. https://cloud.google.com/resource-manager/docs/organization-policy/restricting-service-accounts#example_policy_boolean_constraint
upvoted 7 times
AzureDP900
2 years, 5 months ago
Yes C. Create a custom service account for the cluster. Enable the constraints/iam.disableServiceAccountKeyCreation organization policy at the project level
upvoted 1 times
...
...
Zek
Most Recent 4 months, 1 week ago
Selected Answer: C
C. Create a custom service account for the cluster. Enable the constraints/iam.disableServiceAccountKeyCreation organization policy at the project level
upvoted 1 times
...
Xoxoo
1 year, 6 months ago
Selected Answer: C
To minimize the risk of credentials being stolen by a third party when deploying your cloud infrastructure using a CI/CD cluster hosted on Compute Engine, you should create a custom service account for the cluster and enable the constraints/iam.disableServiceAccountKeyCreation organization policy at the project level. By creating a custom service account for the cluster, you can have more control over the permissions and access granted to the cluster. This allows you to follow the principle of least privilege and ensure that only the necessary permissions are assigned to the service account. Enabling the constraints/iam.disableServiceAccountKeyCreation organization policy at the project level helps prevent unauthorized access to the service account’s credentials by disabling the creation of new service account keys.
upvoted 1 times
...
[Removed]
1 year, 8 months ago
Selected Answer: C
"C" Service Account Keys get exported outside GCP to local machines and this is where the main risk comes from. Therefore you can mitigate this risk by disabling the creation of service account keys. https://cloud.google.com/resource-manager/docs/organization-policy/restricting-service-accounts#disable_service_account_key_creation
upvoted 2 times
...
mikesp
2 years, 10 months ago
Selected Answer: C
Also think it is C
upvoted 4 times
...
mT3
2 years, 10 months ago
Selected Answer: C
Answer is (C). To minimize the risk of credentials being stolen by third parties, it is desirable to control the use of unmanaged long-term credentials. ・"constraints/iam.allowServiceAccountCredentialLifetimeExtension": extends the lifetime of access tokens. ・"iam.disableServiceAccountCreation": disables service account creation. ・"iam.disableServiceAccountKeyCreation": controls the use of unmanaged long-term credentials for service accounts. Ref : https://cloud.google.com/resource-manager/docs/organization-policy/restricting-service-accounts#example_policy_boolean_constraint
upvoted 2 times
...
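Enabling the constraint from answer C can be sketched with the legacy org-policy commands; the project ID is a placeholder, and newer gcloud versions also expose this through the `gcloud org-policies` surface.

```shell
# Block creation of user-managed service account keys in the CI/CD project.
# The project ID is a hypothetical placeholder.
gcloud resource-manager org-policies enable-enforce \
    iam.disableServiceAccountKeyCreation \
    --project=my-cicd-project
```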

Question 118

Exam Professional Cloud Security Engineer topic 1 question 118 discussion

You need to set up two network segments: one with an untrusted subnet and the other with a trusted subnet. You want to configure a virtual appliance such as a next-generation firewall (NGFW) to inspect all traffic between the two network segments. How should you design the network to inspect the traffic?

  • A. 1. Set up one VPC with two subnets: one trusted and the other untrusted. 2. Configure a custom route for all traffic (0.0.0.0/0) pointed to the virtual appliance.
  • B. 1. Set up one VPC with two subnets: one trusted and the other untrusted. 2. Configure a custom route for all RFC1918 subnets pointed to the virtual appliance.
  • C. 1. Set up two VPC networks: one trusted and the other untrusted, and peer them together. 2. Configure a custom route on each network pointed to the virtual appliance.
  • D. 1. Set up two VPC networks: one trusted and the other untrusted. 2. Configure a virtual appliance using multiple network interfaces, with each interface connected to one of the VPC networks.
Suggested Answer: D 🗳️

Comments

mouchu
Highly Voted 1 year, 10 months ago
Answer = D Multiple network interfaces. The simplest way to connect multiple VPC networks through a virtual appliance is by using multiple network interfaces, with each interface connecting to one of the VPC networks. Internet and on-premises connectivity is provided over one or two separate network interfaces. With many NGFW products, internet connectivity is connected through an interface marked as untrusted in the NGFW software.
upvoted 11 times
mT3
1 year, 10 months ago
Agreed. Ref: For Cisco Firepower Threat Defense Virtual: https://www.cisco.com/c/en/us/td/docs/security/firepower/quick_start/gcp/ftdv-gcp-gsg/ftdv-gcp-intro.html
upvoted 2 times
AzureDP900
1 year, 5 months ago
Agree D. 1. Set up two VPC networks: one trusted and the other untrusted. 2. Configure a virtual appliance using multiple network interfaces, with each interface connected to one of the VPC networks.
upvoted 2 times
...
...
...
mikesp
Highly Voted 1 year, 10 months ago
Selected Answer: D
https://cloud.google.com/architecture/best-practices-vpc-design This architecture has multiple VPC networks that are bridged by an L7 next-generation firewall (NGFW) appliance, which functions as a multi-NIC bridge between VPC networks.
upvoted 5 times
...
rsamant
Most Recent 4 months, 1 week ago
A, we need to define routing to divert all traffic through the network appliance https://cloud.google.com/architecture/architecture-centralized-network-appliances-on-google-cloud
upvoted 1 times
rsamant
4 months, 1 week ago
No, B is the correct answer. Use routing: in this approach, Google Cloud routes direct the traffic to the virtual appliances from the connected VPC networks
upvoted 1 times
...
...
desertlotus1211
7 months, 1 week ago
I'm not sure if Answer D is the 'most' correct answer.... The subnet already exists... it didn't ask for a redesign.
upvoted 2 times
desertlotus1211
7 months, 1 week ago
After reading again - the question is in fact asking to design a network with those subnets... Answer D is correct. Sorry about that
upvoted 2 times
...
...
blacortik
7 months, 2 weeks ago
Selected Answer: D
D, specifically addresses the design of using two VPC networks and connecting a virtual appliance (NGFW) with multiple interfaces, each connected to a different VPC network. This design allows the appliance to inspect and control the traffic between the trusted and untrusted segments effectively.
upvoted 2 times
...
zellck
1 year, 6 months ago
Selected Answer: D
D is the answer. https://cloud.google.com/architecture/best-practices-vpc-design#l7 This architecture has multiple VPC networks that are bridged by an L7 next-generation firewall (NGFW) appliance, which functions as a multi-NIC bridge between VPC networks. An untrusted, outside VPC network is introduced to terminate hybrid interconnects and internet-based connections that terminate on the outside leg of the L7 NGFW for inspection. There are many variations on this design, but the key principle is to filter traffic through the firewall before the traffic reaches trusted VPC networks.
upvoted 4 times
...
badrik
1 year, 9 months ago
Selected Answer: B
B, 100%!
upvoted 1 times
...
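The multi-NIC design in answer D can be sketched as follows; all network, subnet, and zone names are placeholders, and a real NGFW deployment would use the vendor's image and its own bootstrap configuration.

```shell
# Create an appliance VM with one NIC in each VPC network (placeholders throughout).
# --can-ip-forward lets the VM forward traffic between its interfaces.
gcloud compute instances create ngfw-appliance \
    --zone=us-central1-a \
    --can-ip-forward \
    --network-interface=network=untrusted-vpc,subnet=untrusted-subnet,no-address \
    --network-interface=network=trusted-vpc,subnet=trusted-subnet,no-address
```

Each VPC then needs custom routes pointing at the appliance's interface in that network, so traffic between the two segments traverses the firewall.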

Question 119

Exam Professional Cloud Security Engineer topic 1 question 119 discussion

You are a member of your company's security team. You have been asked to reduce your Linux bastion host external attack surface by removing all public IP addresses. Site Reliability Engineers (SREs) require access to the bastion host from public locations so they can access the internal VPC while off-site. How should you enable this access?

  • A. Implement Cloud VPN for the region where the bastion host lives.
  • B. Implement OS Login with 2-step verification for the bastion host.
  • C. Implement Identity-Aware Proxy TCP forwarding for the bastion host.
  • D. Implement Google Cloud Armor in front of the bastion host.
Suggested Answer: C 🗳️

Comments

mikesp
Highly Voted 1 year, 10 months ago
Selected Answer: C
The answer is clear in this case.
upvoted 6 times
...
Xoxoo
Most Recent 6 months, 3 weeks ago
Selected Answer: C
To enable access to the bastion host from public locations while reducing the Linux bastion host external attack surface by removing all public IP addresses, you should implement Identity-Aware Proxy TCP forwarding for the bastion host. This will allow Site Reliability Engineers (SREs) to access the internal VPC while off-site. Identity-Aware Proxy TCP forwarding allows you to securely access TCP-based applications such as SSH and RDP without exposing them to the internet. It provides a secure way to access your applications by verifying user identity and context of the request before granting access. By implementing Identity-Aware Proxy TCP forwarding for the bastion host, you can ensure that only authorized users can access the internal VPC while off-site, reducing the risk of unauthorized access and data breaches.
upvoted 3 times
...
bruh_1
1 year ago
C is correct
upvoted 1 times
...
AzureDP900
1 year, 5 months ago
C. Implement Identity-Aware Proxy TCP forwarding for the bastion host.
upvoted 2 times
...
mT3
1 year, 10 months ago
Selected Answer: C
Correct. Ref.https://cloud.google.com/architecture/building-internet-connectivity-for-private-vms#configuring_iap_tunnels_for_interacting_with_instances
upvoted 3 times
...
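The IAP-based setup can be sketched in two steps: allow IAP's published TCP-forwarding source range to reach the bastion, then tunnel SSH through IAP. Rule, network, and instance names are placeholders.

```shell
# Allow IAP's TCP-forwarding range (35.235.240.0/20) to reach SSH in the VPC.
# Rule and network names are hypothetical placeholders.
gcloud compute firewall-rules create allow-iap-ssh \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=35.235.240.0/20

# SREs connect to the private bastion through an IAP tunnel (no public IP needed).
gcloud compute ssh bastion-host --zone=us-central1-a --tunnel-through-iap
```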

Question 120

Exam Professional Cloud Security Engineer topic 1 question 120 discussion

You need to enable VPC Service Controls and allow changes to perimeters in existing environments without preventing access to resources. Which VPC Service Controls mode should you use?

  • A. Cloud Run
  • B. Native
  • C. Enforced
  • D. Dry run
Suggested Answer: D 🗳️

Comments

Tabayashi
Highly Voted 1 year, 11 months ago
Answer is (D). In dry run mode, requests that violate the perimeter policy are not denied, only logged. Dry run mode is used to test perimeter configuration and to monitor usage of services without preventing access to resources. https://cloud.google.com/vpc-service-controls/docs/dry-run-mode
upvoted 10 times
...
Xoxoo
Most Recent 6 months, 3 weeks ago
Selected Answer: D
Enforced mode is the default mode for service perimeters. When a service perimeter is enforced, requests that violate the perimeter policy, such as requests to restricted services from outside a perimeter, are denied. Dry run service perimeters are used to test perimeter configuration and to monitor usage of services without preventing access to resources. Answer : D
upvoted 3 times
...
[Removed]
8 months, 2 weeks ago
Selected Answer: D
"D" Only two modes for service perimeter (Enforced and Dry Run). So A and B are not applicable. C (enforced) is too strict and doesn't support the use case of still allowing access to resources. Therefore it's "D" (dry run). https://cloud.google.com/vpc-service-controls/docs/service-perimeters#about-perimeters
upvoted 3 times
...
bruh_1
1 year ago
D is correct
upvoted 1 times
...
AzureDP900
1 year, 5 months ago
D -- Dry run mode
upvoted 2 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: D
D. Dry run
upvoted 1 times
...
zellck
1 year, 6 months ago
Selected Answer: D
D is the answer.
upvoted 2 times
...
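The dry-run workflow the commenters describe can be sketched with gcloud; this is a sketch only, and the perimeter name, project number, and policy ID below are hypothetical:

```shell
# Create a dry-run-only perimeter config: violations are logged, not denied.
gcloud access-context-manager perimeters dry-run create nonprod-perimeter \
  --perimeter-title="NONPROD perimeter (dry run)" \
  --perimeter-type=regular \
  --perimeter-resources=projects/1234567890 \
  --perimeter-restricted-services=storage.googleapis.com \
  --policy=POLICY_ID

# Inspect the dry-run config and, once the violation logs look clean, enforce it.
gcloud access-context-manager perimeters dry-run describe nonprod-perimeter \
  --policy=POLICY_ID
gcloud access-context-manager perimeters dry-run enforce nonprod-perimeter \
  --policy=POLICY_ID
```

This matches the question's requirement: the perimeter can be changed and evaluated in existing environments without blocking access until it is explicitly enforced.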

Question 121

Question #: 121
Topic #: 1

You manage your organization's Security Operations Center (SOC). You currently monitor and detect network traffic anomalies in your Google Cloud VPCs based on packet header information. However, you want the capability to explore network flows and their payload to aid investigations. Which Google Cloud product should you use?

  • A. Marketplace IDS
  • B. VPC Flow Logs
  • C. VPC Service Controls logs
  • D. Packet Mirroring
  • E. Google Cloud Armor Deep Packet Inspection
Suggested Answer: D 🗳️

Comments

Tabayashi
Highly Voted 2 years, 5 months ago
Answer is (D). Packet Mirroring clones the traffic of specified instances in your Virtual Private Cloud (VPC) network and forwards it for examination. Packet Mirroring captures all traffic and packet data, including payloads and headers. https://cloud.google.com/vpc/docs/packet-mirroring
upvoted 9 times
...
dija123
Most Recent 7 months, 1 week ago
Selected Answer: D
Agree with D
upvoted 1 times
...
[Removed]
1 year, 2 months ago
Selected Answer: D
"D" Only packet mirroring allows deep packet (and payload) analysis. https://cloud.google.com/vpc/docs/packet-mirroring#enterprise_security
upvoted 3 times
...
AzureDP900
1 year, 11 months ago
Packet Mirroring D is right
upvoted 3 times
...
AwesomeGCP
2 years ago
Selected Answer: D
D. Packet Mirroring
upvoted 2 times
...
zellck
2 years ago
Selected Answer: D
D is the answer.
upvoted 2 times
...
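Option D can be sketched with gcloud; the region, network, subnet, and collector forwarding-rule names below are hypothetical, and the collector internal load balancer (e.g., fronting IDS appliances) must already exist:

```shell
# Mirror traffic (headers and payloads) from the production subnet to the
# collector ILB for out-of-band inspection.
gcloud compute packet-mirrorings create soc-mirror \
  --region=us-central1 \
  --network=prod-vpc \
  --collector-ilb=ids-collector-rule \
  --mirrored-subnets=prod-subnet \
  --filter-protocols=tcp,udp
```

Unlike VPC Flow Logs, which only record header-level metadata, the mirrored packets include full payloads, which is what the question asks for.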

Question 122

Question #: 122
Topic #: 1

Your organization acquired a new workload. The Web and Application (App) servers will be running on Compute Engine in a newly created custom VPC. You are responsible for configuring a secure network communication solution that meets the following requirements:
✑ Only allows communication between the Web and App tiers.
✑ Enforces consistent network security when autoscaling the Web and App tiers.
✑ Prevents Compute Engine Instance Admins from altering network traffic.
What should you do?

  • A. 1. Configure all running Web and App servers with respective network tags. 2. Create an allow VPC firewall rule that specifies the target/source with respective network tags.
  • B. 1. Configure all running Web and App servers with respective service accounts. 2. Create an allow VPC firewall rule that specifies the target/source with respective service accounts.
  • C. 1. Re-deploy the Web and App servers with instance templates configured with respective network tags. 2. Create an allow VPC firewall rule that specifies the target/source with respective network tags.
  • D. 1. Re-deploy the Web and App servers with instance templates configured with respective service accounts. 2. Create an allow VPC firewall rule that specifies the target/source with respective service accounts.
Suggested Answer: D 🗳️

Comments

KillerGoogle
Highly Voted 2 years, 11 months ago
D https://cloud.google.com/vpc/docs/firewalls#service-accounts-vs-tags
upvoted 15 times
...
csrazdan
Highly Voted 2 years, 4 months ago
Selected Answer: D
The requirement can be fulfilled by both network tags and service accounts. To update both compute instances will have to be stopped. That means options A and B are out. Option C is out because Compute Engine Instance Admins can change network tags and avoid firewall rules. Deployment has to be done based on the instance template so that no configuration can be changed to divert the traffic.
upvoted 8 times
...
Sundar_Pichai
Most Recent 7 months, 2 weeks ago
Selected Answer: D
It's D because of the use of autoscaling. If autoscaling weren't part of the question, then B would have been suitable. It can't be network tags because Instance Admins can change those.
upvoted 1 times
...
Ric350
2 years ago
Can you create an instance template with a service account? How do you automate that, and how does it assign the service account to each new instance?
upvoted 1 times
TNT87
2 years ago
You can set up a new instance to run as a service account through the Google Cloud console, the Google Cloud CLI, or directly through the API. Go to the Create an instance page. Specify the VM details. In the Identity and API access section, choose the service account you want to use from the drop-down list. https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances
upvoted 2 times
...
...
AzureDP900
2 years, 5 months ago
D is right
upvoted 1 times
...
risc
2 years, 5 months ago
This depends on what is meant by "re-deploy". A service account can also be changed by simply stopping the VM, changing the SA, and starting it again. Is that already a re-deploy?
upvoted 3 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: D
D. 1. Re-deploy the Web and App servers with instance templates configured with respective service accounts. 2. Create an allow VPC firewall rule that specifies the target/source with respective service accounts.
upvoted 1 times
...
zellck
2 years, 6 months ago
Selected Answer: D
D is the answer. https://cloud.google.com/vpc/docs/firewalls#service-accounts-vs-tags A service account represents an identity associated with an instance. Only one service account can be associated with an instance. You control access to the service account by controlling the grant of the Service Account User role for other IAM principals. For an IAM principal to start an instance by using a service account, that principal must have the Service Account User role to at least use that service account and appropriate permissions to create instances (for example, having the Compute Engine Instance Admin role to the project).
upvoted 2 times
...
cloudprincipal
2 years, 10 months ago
Selected Answer: D
Agreed, it has to be D https://cloud.google.com/vpc/docs/firewalls#service-accounts-vs-tags
upvoted 2 times
...
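The firewall rule in option D can be sketched with gcloud; the service account emails, VPC name, and port below are hypothetical:

```shell
# Allow only Web -> App traffic, keyed to service accounts so that autoscaled
# instances created from the templates inherit the rule, and Instance Admins
# can't re-tag their way around it.
gcloud compute firewall-rules create allow-web-to-app \
  --network=custom-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:8080 \
  --source-service-accounts=web-sa@my-project.iam.gserviceaccount.com \
  --target-service-accounts=app-sa@my-project.iam.gserviceaccount.com
```

Changing an instance's service account requires stopping it and having the Service Account User role, which is what puts the control out of reach of a plain Instance Admin.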

Question 123

Question #: 123
Topic #: 1

You need to connect your organization's on-premises network with an existing Google Cloud environment that includes one Shared VPC with two subnets named
Production and Non-Production. You are required to:
✑ Use a private transport link.
✑ Configure access to Google Cloud APIs through private API endpoints originating from on-premises environments.
✑ Ensure that Google Cloud APIs are only consumed via VPC Service Controls.
What should you do?

  • A. 1. Set up a Cloud VPN link between the on-premises environment and Google Cloud. 2. Configure private access using the restricted.googleapis.com domains in on-premises DNS configurations.
  • B. 1. Set up a Partner Interconnect link between the on-premises environment and Google Cloud. 2. Configure private access using the private.googleapis.com domains in on-premises DNS configurations.
  • C. 1. Set up a Direct Peering link between the on-premises environment and Google Cloud. 2. Configure private access for both VPC subnets.
  • D. 1. Set up a Dedicated Interconnect link between the on-premises environment and Google Cloud. 2. Configure private access using the restricted.googleapis.com domains in on-premises DNS configurations.
Suggested Answer: D 🗳️

Comments

ExamQnA
Highly Voted 10 months, 3 weeks ago
Ans: D restricted.googleapis.com (199.36.153.4/30) only provides access to Cloud and Developer APIs that support VPC Service Controls. VPC Service Controls are enforced for these services https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid
upvoted 13 times
...
AzureDP900
Most Recent 5 months, 1 week ago
D. 1. Set up a Dedicated Interconnect link between the on-premises environment and Google Cloud. 2. Configure private access using the restricted.googleapis.com domains in on-premises DNS configurations.
upvoted 3 times
...
sumundada
8 months, 3 weeks ago
Selected Answer: D
restricted.googleapis.com makes it clear choice
upvoted 4 times
...
cloudprincipal
10 months, 1 week ago
Selected Answer: D
Tough call between A and D. "✑ Use a private transport link" pushes me towards VPN connection, but the dedicated interconnect probably also fulfills that.
upvoted 2 times
Aiffone
9 months, 1 week ago
Not a tough call, VPN happens over the internet and isn't as private as dedicated interconnect...makes it a straight D
upvoted 9 times
...
...
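The on-premises DNS side of option D can be sketched as follows; the records are shown BIND-style in comments because the on-prem DNS server is vendor-specific, and the verification command assumes `dig` is available:

```shell
# On-prem DNS records (sketch): send all Google API calls to the restricted
# VIP range 199.36.153.4/30, which only serves VPC-SC-supported APIs and is
# reachable over the Dedicated Interconnect link.
#
#   *.googleapis.com.          IN CNAME restricted.googleapis.com.
#   restricted.googleapis.com. IN A     199.36.153.4
#   restricted.googleapis.com. IN A     199.36.153.5
#   restricted.googleapis.com. IN A     199.36.153.6
#   restricted.googleapis.com. IN A     199.36.153.7

# Verify from an on-prem host that API hostnames resolve into the restricted range:
dig +short storage.googleapis.com
```

private.googleapis.com (199.36.153.8/30), by contrast, serves most APIs regardless of VPC Service Controls support, which is why it fails the third requirement.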

Question 124

Question #: 124
Topic #: 1

You are working with protected health information (PHI) for an electronic health record system. The privacy officer is concerned that sensitive data is stored in the analytics system. You are tasked with anonymizing the sensitive data in a way that is not reversible. Also, the anonymized data should not preserve the character set and length. Which Google Cloud solution should you use?

  • A. Cloud Data Loss Prevention with deterministic encryption using AES-SIV
  • B. Cloud Data Loss Prevention with format-preserving encryption
  • C. Cloud Data Loss Prevention with cryptographic hashing
  • D. Cloud Data Loss Prevention with Cloud Key Management Service wrapped cryptographic keys
Suggested Answer: C 🗳️

Comments

Tabayashi
Highly Voted 2 years, 5 months ago
Answer is (C). The only option that is irreversible is cryptographic hashing. https://cloud.google.com/dlp/docs/pseudonymization?hl=JA&skip_cache=true#supported-methods
upvoted 20 times
AzureDP900
1 year, 11 months ago
Agreed C is right
upvoted 1 times
...
...
oezgan
Most Recent 6 months, 3 weeks ago
Gemini says: Restricted Endpoints: While restricted.googleapis.com can be used for private access, it's recommended to use private.googleapis.com for newer services and broader compatibility.
upvoted 1 times
...
mackarel22
1 year, 4 months ago
Selected Answer: C
Hash is not reversible, thus C
upvoted 4 times
...
AwesomeGCP
2 years ago
Selected Answer: C
C. Cloud Data Loss Prevention with cryptographic hashing
upvoted 2 times
...
sumundada
2 years, 2 months ago
Selected Answer: C
https://cloud.google.com/dlp/docs/pseudonymization
upvoted 1 times
...
cloudprincipal
2 years, 4 months ago
Selected Answer: C
Tabayashi is correct. Non-format-preserving and irreversible are the key requirements
upvoted 2 times
...
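The property that makes cryptographic hashing the right fit can be illustrated outside Cloud DLP with a plain SHA-256 digest: the output is one-way, and its length and character set are independent of the input:

```shell
# Not Cloud DLP itself -- just an illustration of the hashing property the
# question relies on: the digest can't be reversed, and it is always 64 hex
# characters no matter the input's length or character set.
printf '%s' '123-45-6789' | sha256sum | awk '{print $1}'
printf '%s' 'a'           | sha256sum | awk '{print $1}'
```

Options A and B are reversible by design (deterministic and format-preserving encryption can be decrypted), which is why they fail the "not reversible" requirement.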

Question 125

Question #: 125
Topic #: 1

You are setting up a CI/CD pipeline to deploy containerized applications to your production clusters on Google Kubernetes Engine (GKE). You need to prevent containers with known vulnerabilities from being deployed. You have the following requirements for your solution:

Must be cloud-native -

✑ Must be cost-efficient
✑ Minimize operational overhead
How should you accomplish this? (Choose two.)

  • A. Create a Cloud Build pipeline that will monitor changes to your container templates in a Cloud Source Repositories repository. Add a step to analyze Container Analysis results before allowing the build to continue.
  • B. Use a Cloud Function triggered by log events in Google Cloud's operations suite to automatically scan your container images in Container Registry.
  • C. Use a cron job on a Compute Engine instance to scan your existing repositories for known vulnerabilities and raise an alert if a non-compliant container image is found.
  • D. Deploy Jenkins on GKE and configure a CI/CD pipeline to deploy your containers to Container Registry. Add a step to validate your container images before deploying your container to the cluster.
  • E. In your CI/CD pipeline, add an attestation on your container image when no vulnerabilities have been found. Use a Binary Authorization policy to block deployments of containers with no attestation in your cluster.
Suggested Answer: AE 🗳️

Comments

mikesp
Highly Voted 1 year, 10 months ago
Selected Answer: AE
On-demand container analysis can be integrated into a Cloud Build pipeline: https://cloud.google.com/container-analysis/docs/ods-cloudbuild Binary Authorization attestation is a complementary cloud-native mechanism.
upvoted 9 times
[Removed]
8 months, 2 weeks ago
Side note - Container Analysis is now known as Artifact Analysis https://cloud.google.com/artifact-analysis/docs/artifact-analysis#ca-ods
upvoted 4 times
...
...
Xoxoo
Most Recent 6 months, 3 weeks ago
Selected Answer: AE
A. Create a Cloud Build pipeline that will monitor changes to your container templates in a Cloud Source Repositories repository. Add a step to analyze Container Analysis results before allowing the build to continue. This approach integrates vulnerability scanning into your CI/CD pipeline using native Google Cloud services. E. In your CI/CD pipeline, add an attestation on your container image when no vulnerabilities have been found. Use a Binary Authorization policy to block deployments of containers with no attestation in your cluster. This approach enforces security policies through Binary Authorization, ensuring only images with proper attestations (i.e., no known vulnerabilities) are deployed.
upvoted 2 times
...
zellck
1 year, 6 months ago
Selected Answer: AE
AE is the answer. https://cloud.google.com/container-analysis/docs/container-analysis Container Analysis is a service that provides vulnerability scanning and metadata storage for containers. The scanning service performs vulnerability scans on images in Container Registry and Artifact Registry, then stores the resulting metadata and makes it available for consumption through an API. https://cloud.google.com/binary-authorization/docs/attestations After a container image is built, an attestation can be created to affirm that a required activity was performed on the image such as a regression test, vulnerability scan, or other test. The attestation is created by signing the image's unique digest. During deployment, instead of repeating the activities, Binary Authorization verifies the attestations using an attestor. If all of the attestations for an image are verified, Binary Authorization allows the image to be deployed.
upvoted 4 times
AzureDP900
1 year, 5 months ago
Agreed
upvoted 1 times
...
...
szl0144
1 year, 10 months ago
AE is the answer, C has too much manual operations
upvoted 1 times
...
ExamQnA
1 year, 10 months ago
Ans: A,E https://cloud.google.com/architecture/binary-auth-with-cloud-build-and-gke#setting_the_binary_authorization_policy
upvoted 2 times
...
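The attestation step in option E can be sketched with gcloud; this is a sketch only, and the attestor, KMS key version, and image digest below are hypothetical:

```shell
# CI/CD step (after the vulnerability scan passes): sign the image digest and
# create an attestation that Binary Authorization will later verify at deploy time.
gcloud container binauthz attestations sign-and-create \
  --artifact-url="us-docker.pkg.dev/my-project/my-repo/app@sha256:DIGEST" \
  --attestor="projects/my-project/attestors/vuln-scan-passed" \
  --keyversion="projects/my-project/locations/global/keyRings/binauthz/cryptoKeys/signer/cryptoKeyVersions/1"
```

A cluster-level Binary Authorization policy requiring this attestor then blocks any image that skipped (or failed) the scan from being deployed.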

Question 126

Question #: 126
Topic #: 1

Which type of load balancer should you use to maintain client IP by default while using the standard network tier?

  • A. SSL Proxy
  • B. TCP Proxy
  • C. Internal TCP/UDP
  • D. TCP/UDP Network
Suggested Answer: D 🗳️

Comments

[Removed]
Highly Voted 8 months, 2 weeks ago
Selected Answer: D
"D" Proxy LB's terminate traffic at the LB layer before forwarding to internal instances. Source client IP is not preserved. This excludes options "A" and "B". TCP/UDP Network LBs (both internal and external) are also known as Passthrough Network LBs and preserve the client IP. So both options "C" and "D" are correct in terms of preserving client IP, however only the external LB ("D") is available in standard tier. Internal Passthrough TCP/UDP Network LB (option "C") is only in Premium Tier. https://cloud.google.com/load-balancing/docs/load-balancing-overview#passthrough-network-lb
upvoted 6 times
...
mikesp
Highly Voted 1 year, 10 months ago
Selected Answer: D
Internal load balancer (C) is also a non-proxied load balancer but it is supported only in premium-tier networks. https://cloud.google.com/load-balancing/docs/load-balancing-overview
upvoted 5 times
...
desertlotus1211
Most Recent 7 months, 1 week ago
Answer is D: https://cloud.google.com/network-tiers/docs/overview#:~:text=Premium%20Tier%20enables%20global%20load,Standard%20Tier%20regional%20IP%20address. Order of elimination: TCP and SSL Proxy are Premium Tier only. It can't be Internal TCP/UDP because the Standard Tier routes traffic across the internet. So D is correct
upvoted 3 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: D
D. TCP/UDP Network
upvoted 4 times
...
zellck
1 year, 6 months ago
Selected Answer: D
D is the answer. https://cloud.google.com/load-balancing/docs/load-balancing-overview#choosing_a_load_balancer
upvoted 4 times
...
piyush_1982
1 year, 8 months ago
Selected Answer: D
Definitely D https://cloud.google.com/load-balancing/docs/load-balancing-overview#backend_region_and_network
upvoted 3 times
...
szl0144
1 year, 10 months ago
TCP Proxy Load Balancing terminates TCP connections from the client and creates new connections to the backends. By default, the original client IP address and port information is not preserved. Answer is D
upvoted 1 times
AzureDP900
1 year, 5 months ago
Yes, D is right
upvoted 1 times
...
...
ExamQnA
1 year, 10 months ago
Selected Answer: D
Ans: D (though it should have been "External TCP/UDP Network load balancer"). Can't be (C), as it is not supported on the standard tier: https://cloud.google.com/load-balancing/docs/load-balancing-overview
upvoted 4 times
...
Tabayashi
1 year, 11 months ago
Answer is (C). Use Internal TCP/UDP Load Balancing in the following circumstances: You need to forward the original packets unproxied. For example, if you need the client source IP address to be preserved. https://cloud.google.com/load-balancing/docs/internal#use_cases
upvoted 2 times
Arturo_Cloud
1 year, 7 months ago
I disagree with you: both C and D preserve the client IP, but only the TCP/UDP Network load balancer is available in the standard network tier. https://cloud.google.com/load-balancing/docs/network https://cloud.google.com/load-balancing/docs/load-balancing-overview
upvoted 1 times
...
...
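Answer D corresponds to an external passthrough Network LB forwarding rule. A sketch with hypothetical names, assuming a regional backend service already exists:

```shell
# External passthrough Network LB in the Standard tier: packets are forwarded
# unproxied, so backends see the original client IP by default.
gcloud compute forwarding-rules create web-lb-rule \
  --region=us-central1 \
  --network-tier=STANDARD \
  --load-balancing-scheme=EXTERNAL \
  --ip-protocol=TCP \
  --ports=80 \
  --backend-service=web-backend-service
```

A proxy LB (SSL/TCP Proxy) would terminate the connection at the load balancer, replacing the source IP, which is exactly what the question rules out.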

Question 127

Question #: 127
Topic #: 1

You want to prevent users from accidentally deleting a Shared VPC host project. Which organization-level policy constraint should you enable?

  • A. compute.restrictSharedVpcHostProjects
  • B. compute.restrictXpnProjectLienRemoval
  • C. compute.restrictSharedVpcSubnetworks
  • D. compute.sharedReservationsOwnerProjects
Suggested Answer: B 🗳️

Comments

Tabayashi
Highly Voted 1 year, 11 months ago
Answer is (B). This boolean constraint restricts the set of users that can remove a Shared VPC project lien without organization-level permission where this constraint is set to True. https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints
upvoted 10 times
...
zellck
Highly Voted 1 year, 6 months ago
Selected Answer: B
B is the answer. https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints#constraints-for-specific-services - constraints/compute.restrictXpnProjectLienRemoval - Restrict shared VPC project lien removal This boolean constraint restricts the set of users that can remove a Shared VPC host project lien without organization-level permission where this constraint is set to True. By default, any user with the permission to update liens can remove a Shared VPC host project lien. Enforcing this constraint requires that permission be granted at the organization level.
upvoted 9 times
AzureDP900
1 year, 5 months ago
Agree with your explanation and Thank you for sharing the link
upvoted 2 times
...
...
Xoxoo
Most Recent 6 months, 3 weeks ago
Selected Answer: B
To prevent users from accidentally deleting a Shared VPC host project, you should enable the compute.restrictXpnProjectLienRemoval organization-level policy constraint . This policy constraint limits IAM principals who can remove the lien that prevents deletion of host projects . By default, a project owner can remove a lien from a project, including a Shared VPC host project, unless an organization-level policy is defined to limit lien removal . Therefore, option B is the correct answer.
upvoted 2 times
...
[Removed]
8 months, 2 weeks ago
Selected Answer: B
"B" GCP Shared VPC is formerly known as Google Cross-Project Networking (XPN) and still referred to as "XPN" in the API. References: https://cloud.google.com/vpc/docs/shared-vpc https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints#constraints-for-specific-services
upvoted 4 times
...
mikesp
1 year, 10 months ago
Selected Answer: B
https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints
upvoted 2 times
...
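Enabling the constraint can be sketched with gcloud (legacy org-policies command); ORG_ID is a placeholder:

```shell
# Enforce the boolean constraint at the organization level so that only
# principals with organization-level lien permissions can remove the lien
# that protects the Shared VPC host project from deletion.
gcloud resource-manager org-policies enable-enforce \
  compute.restrictXpnProjectLienRemoval \
  --organization=ORG_ID
```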

Question 128

Question #: 128
Topic #: 1

Users are reporting an outage on your public-facing application that is hosted on Compute Engine. You suspect that a recent change to your firewall rules is responsible. You need to test whether your firewall rules are working properly. What should you do?

  • A. Enable Firewall Rules Logging on the latest rules that were changed. Use Logs Explorer to analyze whether the rules are working correctly.
  • B. Connect to a bastion host in your VPC. Use a network traffic analyzer to determine at which point your requests are being blocked.
  • C. In a pre-production environment, disable all firewall rules individually to determine which one is blocking user traffic.
  • D. Enable VPC Flow Logs in your VPC. Use Logs Explorer to analyze whether the rules are working correctly.
Suggested Answer: A 🗳️

Comments

mikesp
Highly Voted 1 year, 10 months ago
Selected Answer: A
https://cloud.google.com/vpc/docs/firewall-rules-logging
upvoted 8 times
...
ExamQnA
Highly Voted 1 year, 10 months ago
Ans:A https://cloud.google.com/vpc/docs/firewall-rules-logging
upvoted 6 times
...
Xoxoo
Most Recent 6 months, 3 weeks ago
Selected Answer: A
To test whether your firewall rules are working properly, you can enable Firewall Rules Logging on the latest rules that were changed and use Logs Explorer to analyze whether the rules are working correctly. Firewall Rules Logging lets you audit, verify, and analyze the effects of your firewall rules. It generates an entry called a connection record each time a firewall rule allows or denies traffic. You can view these records in Cloud Logging and export logs to any destination that Cloud Logging export supports. By enabling Firewall Rules Logging on the latest rules that were changed, you can determine if a firewall rule designed to deny traffic is functioning as intended. This will help you identify whether the recent change to your firewall rules is responsible for the reported outage. Therefore, option A is the correct answer.
upvoted 4 times
...
AzureDP900
1 year, 5 months ago
A is right
upvoted 2 times
...
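Option A can be sketched with gcloud; the rule name is hypothetical, and the log filter follows the documented firewall log naming:

```shell
# Turn on logging for the suspect rule...
gcloud compute firewall-rules update suspect-rule --enable-logging

# ...then query the connection records it emits in Cloud Logging.
gcloud logging read \
  'resource.type="gce_subnetwork" AND log_name:"compute.googleapis.com%2Ffirewall"' \
  --limit=10
```

Each connection record shows whether the rule allowed or denied the traffic, which answers "are my rules working?" directly, unlike VPC Flow Logs, which sample flows without attributing them to firewall decisions.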

Question 129

Question #: 129
Topic #: 1

You are a security administrator at your company. Per Google-recommended best practices, you implemented the domain restricted sharing organization policy to allow only required domains to access your projects. An engineering team is now reporting that users at an external partner outside your organization domain cannot be granted access to the resources in a project. How should you make an exception for your partner's domain while following the stated best practices?

  • A. Turn off the domain restriction sharing organization policy. Set the policy value to "Allow All."
  • B. Turn off the domain restricted sharing organization policy. Provide the external partners with the required permissions using Google's Identity and Access Management (IAM) service.
  • C. Turn off the domain restricted sharing organization policy. Add each partner's Google Workspace customer ID to a Google group, add the Google group as an exception under the organization policy, and then turn the policy back on.
  • D. Turn off the domain restricted sharing organization policy. Set the policy value to "Custom." Add each external partner's Cloud Identity or Google Workspace customer ID as an exception under the organization policy, and then turn the policy back on.
Suggested Answer: D 🗳️

Comments

mikesp
Highly Voted 2 years, 10 months ago
Selected Answer: D
The point is that it is necessary to allow identities from another domain in Cloud Identity. The only way to do that is by adding the customer IDs as exceptions; the procedure does not support adding groups. The groups and the corresponding users can be granted access later on with Cloud Identity once the domain of their organization is allowed: The allowed_values are Google Workspace customer IDs, such as C03xgje4y. Only identities belonging to a Google Workspace domain from the list of allowed_values will be allowed on IAM policies once this organization policy has been applied. Google Workspace human users and groups must be part of that Google Workspace domain, and IAM service accounts must be children of an organization resource associated with the given Google Workspace domain.
upvoted 13 times
AzureDP900
2 years, 5 months ago
Agreed with your explanation
upvoted 1 times
...
...
bartlomiejwaw
Highly Voted 2 years, 11 months ago
Selected Answer: C
Policy should be turned on at the end. Adding the whole group as an exception is far more reasonable than adding all identities.
upvoted 5 times
mT3
2 years, 10 months ago
I agree Ref: https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains#setting_the_organization_policy
upvoted 1 times
...
adriannieto
2 years, 1 month ago
Agree, it should be C
upvoted 1 times
...
gcpengineer
1 year, 10 months ago
You cannot add a customer ID to a Google group
upvoted 1 times
...
adriannieto
2 years, 1 month ago
To add more context here's the forcing access doc. https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains#forcing_access
upvoted 1 times
fad3r
2 years ago
If you actually follow this link, it is discussing service accounts: "Alternatively, you can grant access to a Google group that contains the relevant service accounts: Create a Google group within the allowed domain. Use the Google Workspace administrator panel to turn off domain restriction for that group. Add the service account to the group. Grant access to the Google group in the IAM policy." But this does not only apply to service accounts; it could just as easily be users or other resources.
upvoted 1 times
...
...
...
BPzen
Most Recent 4 months, 1 week ago
Selected Answer: D
Why D is Correct: Custom Exceptions for Partner Domains: By setting the policy to "Custom," you can explicitly list the external partner's Cloud Identity or Google Workspace customer ID as an exception. This allows resources to be shared with the specified external domain while maintaining domain restriction for all other domains. Enforcing Best Practices: Turning the policy back on ensures that the domain restricted sharing remains enforced across your organization. Granular Control: Using customer IDs ensures that only the intended partner domain is granted access. This approach avoids unnecessary exposure to other domains.
upvoted 1 times
...
Xoxoo
1 year, 6 months ago
Selected Answer: D
To make an exception for your partner’s domain while following the stated best practices, you can add each external partner’s Cloud Identity or Google Workspace customer ID as an exception under the organization policy. To do this, you need to turn off the domain restricted sharing organization policy and set the policy value to “Custom” . You can then add each external partner’s Cloud Identity or Google Workspace customer ID as an exception under the organization policy and turn the policy back on . Alternatively, you can add each partner’s Google Workspace customer ID to a Google group, add the Google group as an exception under the organization policy, and then turn the policy back on . This approach is useful when you have multiple external partners that need access to your resources .
upvoted 3 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: D
D. Turn off the domain restricted sharing organization policy. Set the policy value to "Custom." Add each external partner's Cloud Identity or Google Workspace customer ID as an exception under the organization policy, and then turn the policy back on.
upvoted 2 times
...
zellck
2 years, 6 months ago
Selected Answer: D
D is the answer. https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains#setting_the_organization_policy The domain restriction constraint is a type of list constraint. Google Workspace customer IDs can be added and removed from the allowed_values list of a domain restriction constraint. The domain restriction constraint does not support denying values, and an organization policy can't be saved with IDs in the denied_values list. All domains associated with a Google Workspace account listed in the allowed_values will be allowed by the organization policy. All other domains will be denied by the organization policy.
upvoted 3 times
AzureDP900
2 years, 5 months ago
Thank you for detailed explanation
upvoted 1 times
...
...
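The comments above agree on listing partner Workspace customer IDs on the domain restriction constraint. A minimal gcloud sketch of that approach, assuming placeholder IDs (123456789012 for the organization, C01234567 for your own Workspace customer ID, C0abcdefg for the partner's):

```shell
# Allow principals from your own domain plus the partner's by customer ID.
# iam.allowedPolicyMemberDomains is a list constraint: only listed
# Workspace/Cloud Identity customer IDs are permitted in IAM policies.
gcloud resource-manager org-policies allow \
    iam.allowedPolicyMemberDomains C01234567 C0abcdefg \
    --organization=123456789012

# Verify what is actually enforced after inheritance is applied.
gcloud resource-manager org-policies describe \
    iam.allowedPolicyMemberDomains \
    --organization=123456789012 --effective
```

Note that the constraint allows whole customer IDs, not individual users, which is why the answer favors customer-ID exceptions over a Google group.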
sumundada
2 years, 8 months ago
Selected Answer: D
https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains#setting_the_organization_policy
upvoted 3 times
...
Medofree
2 years, 10 months ago
The right answer is D, because we add the "Customer ID" as an exception, not a Google group. https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains#setting_the_organization_policy
upvoted 1 times
...

Question 130

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 130 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 130
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You plan to use a Google Cloud Armor policy to prevent common attacks such as cross-site scripting (XSS) and SQL injection (SQLi) from reaching your web application's backend. What are two requirements for using Google Cloud Armor security policies? (Choose two.)

  • A. The load balancer must be an external SSL proxy load balancer.
  • B. Google Cloud Armor Policy rules can only match on Layer 7 (L7) attributes.
  • C. The load balancer must use the Premium Network Service Tier.
  • D. The backend service's load balancing scheme must be EXTERNAL.
  • E. The load balancer must be an external HTTP(S) load balancer.
Show Suggested Answer Hide Answer
Suggested Answer: DE 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
i_am_robot
8 months ago
Selected Answer: DE
Here's the reasoning: D is correct because, per the documentation, one of the requirements for using Google Cloud Armor security policies is that "The backend service's load balancing scheme must be EXTERNAL, EXTERNAL_MANAGED, or INTERNAL_MANAGED." The EXTERNAL scheme is specifically mentioned in the answer option. E is correct because Google Cloud Armor is primarily designed to work with HTTP(S) load balancers; the documentation states that Google Cloud Armor security policies protect "Global external Application Load Balancer (HTTP/HTTPS)" among others.
upvoted 1 times
...
LaithTech
8 months ago
Google Cloud Armor is only supported with the Premium Network Service Tier. The Standard Tier does not support Google Cloud Armor features.
upvoted 1 times
nah99
4 months, 3 weeks ago
This says otherwise https://cloud.google.com/armor/docs/security-policy-overview#requirements
upvoted 1 times
...
...
Bettoxicity
1 year ago
Selected Answer: BE
BE B: Google Cloud Armor operates at Layer 7 (application layer) of the OSI model. Its security policies inspect incoming HTTP(S) requests and can match on various L7 attributes like request headers, body content, and URI paths. This allows you to define rules that block attacks like XSS and SQLi based on their specific characteristics. Why not C: The load balancing scheme of the backend service (internal or external) doesn't impact Cloud Armor's operation. Cloud Armor focuses on filtering traffic at the external load balancer level.
upvoted 1 times
...
aygitci
1 year, 6 months ago
Why not B?
upvoted 2 times
...
Xoxoo
1 year, 6 months ago
Selected Answer: DE
To use Google Cloud Armor security policies to prevent common attacks such as cross-site scripting (XSS) and SQL injection (SQLi) from reaching your web application’s backend, you need to meet the following requirements : 1) The load balancer must be a global external Application Load Balancer, a classic Application Load Balancer, a regional external Application Load Balancer, or an external proxy Network Load Balancer . 2) The backend service’s load balancing scheme must be EXTERNAL, or EXTERNAL_MANAGED if you are using either a global external Application Load Balancer or a regional external Application Load Balancer .
upvoted 1 times
...
[Removed]
1 year, 8 months ago
Selected Answer: DE
"D", "E" As others noted in the comments, "A", "D" and "E" all meet the minimum requirements for setting up Cloud Armor. However, part of the question is having WAF functionality, which is not available for external SSL proxy LBs (A) (no checkmark under the external proxy LB column for the WAF row). This leaves us with D and E only. References: https://cloud.google.com/armor/docs/security-policy-overview#requirements https://cloud.google.com/armor/docs/security-policy-overview#
upvoted 1 times
...
gcpengineer
1 year, 10 months ago
Now we can also manage network load balancers.
upvoted 1 times
...
gcpengineer
1 year, 10 months ago
Selected Answer: DE
DE is the ans
upvoted 1 times
...
AzureDP900
2 years, 5 months ago
D,E is most appropriate in this case D. The backend service's load balancing scheme must be EXTERNAL. E. The load balancer must be an external HTTP(S) load balancer.
upvoted 2 times
...
soltium
2 years, 6 months ago
Selected Answer: DE
DE. Well technically you can use EXTERNAL_MANAGED scheme too.
upvoted 1 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: DE
D. The backend service's load balancing scheme must be EXTERNAL. E. The load balancer must be an external HTTP(S) load balancer.
upvoted 1 times
...
Jeanphi72
2 years, 7 months ago
Selected Answer: DE
https://cloud.google.com/armor/docs/security-policy-overview#requirements says: The backend service's load balancing scheme must be EXTERNAL, or EXTERNAL_MANAGED *** if you are using global external HTTP(S) load balancer ***. Thus D and E fit (A could fit if a suggestion like The backend service's load balancing scheme must ** NOT ** be EXTERNAL
upvoted 2 times
...
piyush_1982
2 years, 8 months ago
I am not sure if there is some mistake in the question or in the options given. https://cloud.google.com/armor/docs/security-policy-overview#requirements As per the link above, below are the requirements for using Google Cloud Armor security policies: 1. The load balancer must be a global external HTTP(S) load balancer, global external HTTP(S) load balancer (classic), external TCP proxy load balancer, or external SSL proxy load balancer. 2. The backend service's load balancing scheme must be EXTERNAL, or EXTERNAL_MANAGED if you are using a global external HTTP(S) load balancer. 3. The backend service's protocol must be one of HTTP, HTTPS, HTTP/2, TCP, or SSL. The correct answer seems to be A D and E. A. The load balancer must be an external SSL proxy load balancer. (external SSL proxy load balancer is one of the load balancing options listed in the link) D. The backend service's load balancing scheme must be EXTERNAL. (or EXTERNAL_MANAGED) E. The load balancer must be an external HTTP(S) load balancer. (Also one of the options listed)
upvoted 3 times
zellck
2 years, 6 months ago
Security policy for A does not block XSS and SQLi which is at layer 7. https://cloud.google.com/armor/docs/security-policy-overview#policy-types
upvoted 5 times
TNT87
2 years ago
Not true. Security policy overview: Google Cloud Armor security policies protect your application by providing Layer 7 filtering and by scrubbing incoming requests for common web attacks or other Layer 7 attributes to potentially block traffic before it reaches your load balanced backend services or backend buckets. Each security policy is made up of a set of rules that filter traffic based on conditions such as an incoming request's IP address, IP range, region code, or request headers. Google Cloud Armor security policies are available only for backend services of global external HTTP(S) load balancers, global external HTTP(S) load balancers (classic), external TCP proxy load balancers, or external SSL proxy load balancers. The load balancer can be in Premium Tier or Standard Tier. https://cloud.google.com/armor/docs/security-policy-overview . A, D, E are correct
upvoted 1 times
[Removed]
1 year, 8 months ago
If you look at the table here, you'll see that the row that has "WAF" (which is what you need here for web application firewall) is unchecked under the External Proxy LB column. This disqualifies "A" from the answer and leaves us with "D" and "E" only. Reference: https://cloud.google.com/armor/docs/security-policy-overview#expandable-1 So good catch piyush_1982 and zellck !
upvoted 1 times
...
...
...
...
nacying
2 years, 10 months ago
Selected Answer: DE
These are the requirements for using Google Cloud Armor security policies: The load balancer must be an external HTTP(S) load balancer, TCP proxy load balancer, or SSL proxy load balancer. The backend service's load balancing scheme must be EXTERNAL. The backend service's protocol must be one of HTTP, HTTPS, HTTP/2, TCP, or SSL. https://cloud.google.com/armor/docs/security-policy-overview
upvoted 1 times
...
cloudprincipal
2 years, 10 months ago
Selected Answer: DE
DE Requirements These are the requirements for using Google Cloud Armor security policies: * The load balancer must be an external HTTP(S) load balancer, TCP proxy load balancer, or SSL proxy load balancer. * The backend service's load balancing scheme must be EXTERNAL. * The backend service's protocol must be one of HTTP, HTTPS, HTTP/2, TCP, or SSL. See https://cloud.google.com/armor/docs/security-policy-overview#requirements
upvoted 3 times
...
szl0144
2 years, 10 months ago
Google Cloud Armor security policies are sets of rules that match on attributes from Layer 3 to Layer 7 to protect externally facing applications or services. Each rule is evaluated with respect to incoming traffic. I choose DE
upvoted 1 times
...
ExamQnA
2 years, 10 months ago
Ans:D,E https://cloud.google.com/armor/docs/security-policy-overview Relevant extracts: 1. Google Cloud Armor security policies enable you to rate-limit or redirect requests to your HTTP(S) Load Balancing, TCP Proxy Load Balancing, or SSL Proxy Load Balancing ... 2. Google Cloud Armor security policies are sets of rules that match on attributes from Layer 3 to Layer 7 to protect externally facing applications or services... 3. The load balancer can be in Premium Tier or Standard Tier.
upvoted 3 times
...
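The consensus (D and E: an external HTTP(S) load balancer with an EXTERNAL backend service) can be sketched with gcloud. This is a hedged illustration, assuming a placeholder policy name (block-xss-sqli) and an existing global backend service (web-backend):

```shell
# Create a Cloud Armor security policy.
gcloud compute security-policies create block-xss-sqli \
    --description "Block common XSS and SQLi attacks"

# Add preconfigured WAF rules for XSS and SQL injection.
gcloud compute security-policies rules create 1000 \
    --security-policy block-xss-sqli \
    --expression "evaluate_preconfigured_expr('xss-stable')" \
    --action deny-403

gcloud compute security-policies rules create 1001 \
    --security-policy block-xss-sqli \
    --expression "evaluate_preconfigured_expr('sqli-stable')" \
    --action deny-403

# Attach the policy to a backend service behind an external HTTP(S)
# load balancer; its load balancing scheme must be EXTERNAL.
gcloud compute backend-services update web-backend \
    --security-policy block-xss-sqli --global
```

The preconfigured WAF expressions only take effect on application load balancers, which is why the WAF requirement rules out the SSL proxy option (A).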

Question 131

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 131 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 131
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You perform a security assessment on a customer architecture and discover that multiple VMs have public IP addresses. After providing a recommendation to remove the public IP addresses, you are told those VMs need to communicate to external sites as part of the customer's typical operations. What should you recommend to reduce the need for public IP addresses in your customer's VMs?

  • A. Google Cloud Armor
  • B. Cloud NAT
  • C. Cloud Router
  • D. Cloud VPN
Show Suggested Answer Hide Answer
Suggested Answer: B 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
Random_Mane
Highly Voted 7 months, 1 week ago
Selected Answer: B
B. https://cloud.google.com/nat/docs/overview
upvoted 7 times
...
AzureDP900
Most Recent 5 months, 1 week ago
B Cloud NAT
upvoted 2 times
...
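The Cloud NAT answer (B) can be illustrated with a short gcloud sketch, assuming placeholder names (my-vpc, nat-router, nat-config) and the us-central1 region:

```shell
# Cloud NAT is configured on a Cloud Router in the VMs' region.
gcloud compute routers create nat-router \
    --network my-vpc --region us-central1

# Give all subnets in the region outbound internet access through NAT
# with auto-allocated external IPs, so the VMs' own public IP
# addresses can be removed.
gcloud compute routers nats create nat-config \
    --router nat-router --region us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```

Cloud NAT provides outbound-only connectivity: external sites cannot initiate connections back to the VMs, which also reduces the attack surface.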

Question 132

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 132 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 132
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are tasked with exporting and auditing security logs for login activity events for Google Cloud console and API calls that modify configurations to Google
Cloud resources. Your export must meet the following requirements:
✑ Export related logs for all projects in the Google Cloud organization.
✑ Export logs in near real-time to an external SIEM.
What should you do? (Choose two.)

  • A. Create a Log Sink at the organization level with a Pub/Sub destination.
  • B. Create a Log Sink at the organization level with the includeChildren parameter, and set the destination to a Pub/Sub topic.
  • C. Enable Data Access audit logs at the organization level to apply to all projects.
  • D. Enable Google Workspace audit logs to be shared with Google Cloud in the Admin Console.
  • E. Ensure that the SIEM processes the AuthenticationInfo field in the audit log entry to gather identity information.
Show Suggested Answer Hide Answer
Suggested Answer: BC 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
cloudprincipal
Highly Voted 2 years, 10 months ago
Selected Answer: BD
B because for all projects D "Google Workspace Login Audit: Login Audit logs track user sign-ins to your domain. These logs only record the login event. They don't record which system was used to perform the login action." https://cloud.google.com/logging/docs/audit/gsuite-audit-logging#services
upvoted 13 times
exambott
2 years, 2 months ago
Google Cloud logs are different from Google Workspace logs. D is definitely incorrect.
upvoted 1 times
...
mikez2023
2 years, 1 month ago
There is no mention of anything like "Google Workspace", so why is D correct?
upvoted 2 times
...
...
ExamQnA
Highly Voted 2 years, 10 months ago
Ans:B,C https://cloud.google.com/logging/docs/export/aggregated_sinks: To use aggregated sinks, you create a sink in a Google Cloud organization or folder, and set the sink's includeChildren parameter to True. That sink can then route log entries from the organization or folder, plus (recursively) from any contained folders, billing accounts, or Cloud projects. https://cloud.google.com/logging/docs/audit#data-access Data Access audit logs-- except for BigQuery Data Access audit logs-- are disabled by default because audit logs can be quite large. If you want Data Access audit logs to be written for Google Cloud services other than BigQuery, you must explicitly enable them
upvoted 12 times
passex
2 years, 3 months ago
There is no mention about 'data access logs' in question
upvoted 2 times
Nik2592s
1 year, 10 months ago
API calls are tracked in Data access logs
upvoted 4 times
luca_scalzotto
1 year, 2 months ago
The question states: "API calls that modify configurations to Google Cloud resources". From the documentation: "Admin Activity audit logs contain log entries for API calls or other actions that modify the configuration or metadata of resources. For example, these logs record when users create VM instances or change Identity and Access Management permissions." Therefore, it cannot be C.
upvoted 1 times
...
...
...
...
BPzen
Most Recent 4 months, 1 week ago
Selected Answer: BE
Why B and E are correct: B. Creating a Log Sink at the organization level with the includeChildren parameter and setting the destination to a Pub/Sub topic covers all projects and exports in near real-time. E. Ensuring that the SIEM processes the AuthenticationInfo field in the audit log entry gathers the identity information. Why not the other options: C. Enabling Data Access logs is not required for this use case; the question only asks for login activity and configuration changes, which are captured in Admin Activity logs. D. Google Workspace audit logs are not directly relevant; they are not required for capturing Google Cloud login activity and configuration changes.
upvoted 1 times
...
Mr_MIXER007
7 months, 1 week ago
Selected Answer: BC
B because it applies to all projects, and C.
upvoted 1 times
...
60090d7
8 months ago
Selected Answer: BD
Turn on audit logs and a sink to Pub/Sub (near real-time).
upvoted 1 times
...
piipo
10 months ago
Selected Answer: BC
No Workspace
upvoted 1 times
...
pico
11 months ago
Selected Answer: BC
why the other options are not as suitable: A: While creating a log sink at the organization level is correct, it won't include logs from child projects unless the includeChildren parameter is set to true. D: Google Workspace audit logs are separate from Google Cloud audit logs and won't provide the required information about Google Cloud console logins or API calls. E: While processing the AuthenticationInfo field is essential for identifying actors, it is not a step in the setup of the log export itself.
upvoted 2 times
...
Bettoxicity
1 year ago
Selected Answer: AE
AE A: Setting up a Log Sink at the organization level with Pub/Sub as the destination guarantees you capture logs from all projects within your organization. E: The AuthenticationInfo field within audit log entries provides valuable details about the user or service that made the configuration change or login attempt. Your SIEM needs to be able to process this field to extract identity information for security audit purposes. B. IncludeChildren Parameter (Not Required) C. Data Access Audit Logs (Not Specific)
upvoted 1 times
...
gurusen88
1 year, 1 month ago
B & E B. Organization Level Log Sink with includeChildren parameter: Creating a log sink at the organization level with the includeChildren parameter ensures that you capture logs from all projects within the organization. Setting the destination to a Pub/Sub topic is suitable for real-time log export, meeting the requirement to export logs in near real-time to an external SIEM. E. Processing the AuthenticationInfo field: The AuthenticationInfo field in the audit log entries contains identity information, which is crucial for auditing security logs for login activity. Ensuring that the SIEM processes this field allows for a detailed analysis of who is accessing what, fulfilling the requirement to audit login activity events and API calls that modify configurations.
upvoted 2 times
...
mjcts
1 year, 3 months ago
Selected Answer: BC
No mention of Google Workspace
upvoted 3 times
...
loonytunes
1 year, 5 months ago
ANS: B,D. API calls that modify configuration of resources are in Admin Activity audit logs, which are on by default (along with System Events and Deny Policies). Thus not C. You can also enable Google Workspace logs to be forwarded to Google Cloud at the org level; same link: https://cloud.google.com/logging/docs/audit/gsuite-audit-logging#log-types
upvoted 1 times
...
aygitci
1 year, 6 months ago
Selected Answer: BC
No mention of Google Workspace, so definitely not D.
upvoted 3 times
...
Xoxoo
1 year, 6 months ago
Selected Answer: BC
To export and audit security logs for login activity events in the Google Cloud Console and API calls that modify configurations to Google Cloud resources with the specified requirements, you should take the following steps: B. Create a Log Sink at the organization level with the includeChildren parameter and set the destination to a Pub/Sub topic: This step will export related logs from all projects within the Google Cloud organization, including the logs you need. The use of Pub/Sub allows near real-time export of logs. C. Enable Data Access audit logs at the organization level to apply to all projects: Enabling Data Access audit logs at the organization level ensures that logs related to API calls that modify configurations to Google Cloud resources are captured.
upvoted 5 times
Xoxoo
1 year, 6 months ago
The other options are not relevant or necessary for meeting the specified requirements: D. "Enable Google Workspace audit logs to be shared with Google Cloud in the Admin Console" is not directly related to exporting logs for Google Cloud Console and API calls. E. "Ensure that the SIEM processes the AuthenticationInfo field in the audit log entry to gather identity information" is a consideration for how the SIEM system processes logs but is not a configuration step for exporting logs.
upvoted 2 times
...
...
desertlotus1211
1 year, 7 months ago
Can someone explain how or why 'D' can be correct? The logs are Google Cloud not Workspace...
upvoted 2 times
...
[Removed]
1 year, 8 months ago
Selected Answer: BD
"B", "D" B because you need an aggregate sink to recursively pull from children entities; otherwise scope is limited to the specific level where it's created. So this also excludes A. https://cloud.google.com/logging/docs/export/aggregated_sinks#create_an_aggregated_sink C - Data Access Audit Logs - Even though they include API events, they don't explicitly say they also include log-in events. https://cloud.google.com/logging/docs/audit#data-access D - For Workspace Audit Logs, they explicitly say that API calls and log-in events are captured, which makes it a more complete option than "C". Also, Cloud Identity, which is used to manage users of GCP, is a Workspace service, so it would make sense that Workspace logging provides Cloud Identity related sign-in logs. https://cloud.google.com/logging/docs/audit/gsuite-audit-logging https://support.google.com/cloudidentity/answer/7319251
upvoted 1 times
...
gcpengineer
1 year, 10 months ago
Selected Answer: BE
change to BE
upvoted 2 times
...
gcpengineer
1 year, 10 months ago
Selected Answer: BC
BC looks lik ans
upvoted 3 times
...
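The aggregated-sink setup in option B can be sketched with gcloud. A hedged example, assuming placeholder IDs (123456789012 for the organization, siem-project/siem-topic for the Pub/Sub destination); the log filter shown is one plausible choice for audit logs, not the only one:

```shell
# Organization-level aggregated sink; --include-children routes logs
# from every folder and project under the organization.
gcloud logging sinks create siem-sink \
    pubsub.googleapis.com/projects/siem-project/topics/siem-topic \
    --organization=123456789012 \
    --include-children \
    --log-filter='logName:"cloudaudit.googleapis.com"'
```

Creating the sink returns a writer service account; that identity must be granted the Pub/Sub Publisher role on the destination topic before logs flow, and the SIEM then subscribes to the topic.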

Question 133

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 133 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 133
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your company's Chief Information Security Officer (CISO) creates a requirement that business data must be stored in specific locations due to regulatory requirements that affect the company's global expansion plans. After working on the details to implement this requirement, you determine the following:
✑ The services in scope are included in the Google Cloud Data Residency Terms.
✑ The business data remains within specific locations under the same organization.
✑ The folder structure can contain multiple data residency locations.
You plan to use the Resource Location Restriction organization policy constraint. At which level in the resource hierarchy should you set the constraint?

  • A. Folder
  • B. Resource
  • C. Project
  • D. Organization
Show Suggested Answer Hide Answer
Suggested Answer: C 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
mouchu
Highly Voted 2 years, 11 months ago
Answer = C "The folder structure can contain multiple data residency locations" suggest that restriction should be applied on projects level
upvoted 23 times
piyush_1982
2 years, 8 months ago
why not D?
upvoted 2 times
...
AzureDP900
2 years, 5 months ago
Yes, it is C. This is a very tricky question and we need to read very carefully. In general folders would be used, but in this case project is right.
upvoted 3 times
AzureDP900
2 years, 5 months ago
Q 137 is same
upvoted 1 times
...
...
...
Taliesyn
Highly Voted 2 years, 11 months ago
Selected Answer: A
Org policies can't be applied on resources ...
upvoted 6 times
...
Mauratay
Most Recent 1 month, 3 weeks ago
Selected Answer: B
Reference: https://cloud.google.com/resource-manager/docs/organization-policy/defining-locations#overview A policy that includes this constraint will not be enforced on sub-resource creation for certain services, such as Cloud Storage and Dataproc. https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#inheritance Cloud Storage is a resource eligible for location constraints. All other options would be viable with the use of value groups, at either org, folder or project level; however, the only clue here is their data to be stored, which points to Cloud Storage. https://cloud.google.com/resource-manager/docs/organization-policy/defining-locations#value_groups
upvoted 1 times
...
BPzen
4 months, 1 week ago
Selected Answer: C
"The folder structure can contain multiple data residency locations" suggest that restriction should be applied on projects level
upvoted 1 times
...
MFay
11 months, 2 weeks ago
Since you need to ensure that business data remains within specific locations under the same organization and the folder structure can contain multiple data residency locations, you should set the Resource Location Restriction organization policy constraint at the Organization level. Therefore, the correct answer is: D. Organization
upvoted 1 times
...
Bettoxicity
1 year ago
Selected Answer: A
A. Why not C? Project-level constraints wouldn't offer the desired level of granularity: you might have data in a single project that needs to be stored in different locations based on regulations. Why not D? An organization-level constraint would restrict all resources within the organization to a single residency location, which wouldn't meet the need for differentiated locations for various data sets.
upvoted 1 times
...
dija123
1 year ago
Selected Answer: C
Agree with C
upvoted 1 times
...
desertlotus1211
1 year, 7 months ago
https://cloud.google.com/assured-workloads/docs/data-residency Organizations with data residency requirements can set up a Resource Locations policy that constrains the location of new in-scope resources for their whole organization or for individual projects. Answer C is a better choice, though this document talks about folders. But the question says there are multiple data residency locations in those folders, so project level seems to be the best.
upvoted 2 times
...
[Removed]
1 year, 8 months ago
Selected Answer: C
These restrictions can be applied at Org level, Folder Level or Project Level, but not resource level. Also, these policies are inherited, which means they need to be applied at the lowest child possible in the hierarchy where this is needed, not higher. This makes the answer specific to the use case rather than textbook knowledge. According to the given: "The folder structure can contain multiple data residency locations". This means that applying location restrictions at the Folder level or above will violate the requirement.This means you must apply the constraint at Project level. Quotes from the references below: "You can also apply the organization policy to a folder or a project with the folder or the project flags, and the folder ID and project ID, respectively." - no mention of resource level References: https://cloud.google.com/resource-manager/docs/organization-policy/understanding-hierarchy https://cloud.google.com/resource-manager/docs/organization-policy/using-constraints
upvoted 4 times
...
gcpengineer
1 year, 10 months ago
Selected Answer: C
C is the ans
upvoted 3 times
...
AnishAd
1 year, 12 months ago
C it is. Important lines to read from the question to understand why it's at the project level: 1. business data must be stored in specific locations due to regulatory requirements, and 2. the folder structure can contain multiple data residency locations. Since a folder is going to contain multiple data residency locations and the requirement is to restrict to a specific location, the constraint should be set at the project level.
upvoted 2 times
...
alleinallein
2 years ago
Selected Answer: C
Project level seems to be reasonable.
upvoted 2 times
...
marrechea
2 years ago
Selected Answer: C
As "The folder structure can contain multiple data residency locations." it has to be at project level
upvoted 2 times
...
fad3r
2 years ago
A lot of madness in these answers. It is C. You can't apply it at the org level since that affects everything. You can't apply it at the folder level since a folder can contain multiple locations. You CAN apply it at the project level. For those who say you can't apply these policies at the org level, I suggest you spend more time reading docs and testing things in a lab. https://cloud.google.com/blog/products/identity-security/meet-data-residency-requirements-with-google-cloud To strengthen these controls further, Google Cloud offers Organization Policy constraints which can be applied at the organization, folder, or project level.
upvoted 3 times
...
adelynllllllllll
2 years, 4 months ago
the answer should be B https://cloud.google.com/resource-manager/docs/organization-policy/defining-locations
upvoted 1 times
...
Rightsaidfred
2 years, 4 months ago
Selected Answer: C
Different Locations therefore needs to be applied at Project Level.
upvoted 4 times
...
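Applying the Resource Location Restriction constraint per project, as the accepted answer C requires, can be sketched with gcloud. A hedged example with placeholder names (eu-data-project, us-data-project); `in:eu-locations` and `in:us-locations` are the documented value groups for this constraint:

```shell
# Each project gets only the locations its data residency rules allow,
# even though both projects may live under the same folder.
gcloud resource-manager org-policies allow \
    gcp.resourceLocations in:eu-locations \
    --project=eu-data-project

gcloud resource-manager org-policies allow \
    gcp.resourceLocations in:us-locations \
    --project=us-data-project
```

Setting this at the folder or organization level instead would force one location set on projects with different residency requirements, which is exactly what the question rules out.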

Question 134

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 134 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 134
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You need to set up a Cloud Interconnect connection between your company's on-premises data center and VPC host network. You want to make sure that on-premises applications can only access Google APIs over the Cloud Interconnect and not through the public internet. You are required to only use APIs that are supported by VPC Service Controls to mitigate against exfiltration risk to non-supported APIs. How should you configure the network?

  • A. Enable Private Google Access on the regional subnets and global dynamic routing mode.
  • B. Set up a Private Service Connect endpoint IP address with the API bundle of "all-apis", which is advertised as a route over the Cloud interconnect connection.
  • C. Use private.googleapis.com to access Google APIs using a set of IP addresses only routable from within Google Cloud, which are advertised as routes over the connection.
  • D. Use restricted.googleapis.com to access Google APIs using a set of IP addresses only routable from within Google Cloud, which are advertised as routes over the Cloud Interconnect connection.
Show Suggested Answer Hide Answer
Suggested Answer: D 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
Nicky1402
Highly Voted 2 years, 5 months ago
I think the correct answer is D. It is mentioned in the question: "You are required to only use APIs that are supported by VPC Service Controls", from which we can understand that we cannot use private.googleapis.com. Hence, option A & C can be eliminated. API bundle with all-apis is mentioned in option B which is wrong as we want to use only those APIs supported by VPC service controls. Hence, option B can be eliminated. Option D has all the solutions we need. https://cloud.google.com/vpc/docs/private-service-connect An API bundle: All APIs (all-apis): most Google APIs (same as private.googleapis.com). VPC-SC (vpc-sc): APIs that VPC Service Controls supports (same as restricted.googleapis.com). VMs in the same VPC network as the endpoint (all regions) On-premises systems that are connected to the VPC network that contains the endpoint
upvoted 13 times
AzureDP900
1 year, 11 months ago
Yes, it is D.
upvoted 1 times
AzureDP900
1 year, 11 months ago
D. Use restricted.googleapis.com to access Google APIs using a set of IP addresses only routable from within Google Cloud, which are advertised as routes over the Cloud Interconnect connection.
upvoted 1 times
...
...
...
dija123
Most Recent 6 months, 2 weeks ago
Selected Answer: D
Answer is D
upvoted 1 times
...
[Removed]
1 year, 2 months ago
Selected Answer: D
"D" restricted.googleapis.com https://cloud.google.com/vpc-service-controls/docs/set-up-private-connectivity#procedure-overview
upvoted 2 times
...
shayke
1 year, 9 months ago
Selected Answer: D
D- route from on prem
upvoted 1 times
...
samuelmorher
1 year, 9 months ago
Selected Answer: D
it's D
upvoted 2 times
...
marmar11111
1 year, 11 months ago
Selected Answer: D
https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid Choose restricted.googleapis.com when you only need access to Google APIs and services that are supported by VPC Service Controls.
upvoted 1 times
...
AwesomeGCP
2 years ago
Selected Answer: D
D. Use restricted.googleapis.com to access Google APIs using a set of IP addresses only routable from within Google Cloud, which are advertised as routes over the Cloud Interconnect connection.
upvoted 2 times
...
zellck
2 years ago
Selected Answer: D
D is the answer. https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid#config-choose-domain If you need to restrict users to just the Google APIs and services that support VPC Service Controls, use restricted.googleapis.com. Although VPC Service Controls are enforced for compatible and configured services, regardless of the domain you use, restricted.googleapis.com provides additional risk mitigation for data exfiltration. Using restricted.googleapis.com denies access to Google APIs and services that are not supported by VPC Service Controls.
upvoted 1 times
...
bnikunj
2 years, 1 month ago
D is the answer. https://cloud.google.com/vpc/docs/configure-private-service-connect-apis#supported-apis The all-apis bundle provides access to the same APIs as private.googleapis.com. Choose vpc-sc when you only need access to Google APIs and services that are supported by VPC Service Controls. The vpc-sc bundle does not permit access to Google APIs and services that do not support VPC Service Controls.
upvoted 1 times
...
cloudprincipal
2 years, 4 months ago
Selected Answer: D
Will agree with the others
upvoted 2 times
cloudprincipal
2 years, 3 months ago
This is actually specified in the documentation: https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid#config-choose-domain
upvoted 3 times
...
...
ExamQnA
2 years, 4 months ago
Ans: D Note: If you need to restrict users to just the Google APIs and services that support VPC Service Controls, use restricted.googleapis.com. https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid
upvoted 3 times
...
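For readers who want to see what answer D implies in practice, here is a minimal sketch of the DNS and routing picture. The VIP range below is the one Google documents for restricted.googleapis.com (verify against the current docs); the routes list is a placeholder for a Cloud Router custom route advertisement:

```python
import ipaddress

# Documented VIPs (verify against current Google Cloud docs):
#   restricted.googleapis.com -> 199.36.153.4/30  (VPC-SC-supported APIs only)
#   private.googleapis.com    -> 199.36.153.8/30  (most Google APIs)
RESTRICTED_RANGE = ipaddress.ip_network("199.36.153.4/30")

# Records a private googleapis.com DNS zone would carry so that on-prem
# clients resolve every API hostname to the restricted VIP.
dns_records = {
    "restricted.googleapis.com.": [str(ip) for ip in RESTRICTED_RANGE],  # A records
    "*.googleapis.com.": "restricted.googleapis.com.",                   # CNAME
}

def reachable_over_interconnect(ip: str, advertised_routes) -> bool:
    """True if a resolved API address falls inside a range advertised to on-prem."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(r) for r in advertised_routes)

# The Cloud Router custom route advertisement must include the restricted range.
advertised = ["199.36.153.4/30"]
assert reachable_over_interconnect("199.36.153.5", advertised)
assert not reachable_over_interconnect("142.250.80.10", advertised)  # a public IP
```

Because only the restricted /30 is advertised over the Interconnect, on-prem hosts physically cannot reach APIs that VPC Service Controls does not support.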

Question 135

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 135 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 135
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You need to implement an encryption-at-rest strategy that protects sensitive data and reduces key management complexity for non-sensitive data. Your solution has the following requirements:
✑ Schedule key rotation for sensitive data.
✑ Control which region the encryption keys for sensitive data are stored in.
✑ Minimize the latency to access encryption keys for both sensitive and non-sensitive data.
What should you do?

  • A. Encrypt non-sensitive data and sensitive data with Cloud External Key Manager.
  • B. Encrypt non-sensitive data and sensitive data with Cloud Key Management Service.
  • C. Encrypt non-sensitive data with Google default encryption, and encrypt sensitive data with Cloud External Key Manager.
  • D. Encrypt non-sensitive data with Google default encryption, and encrypt sensitive data with Cloud Key Management Service.
Show Suggested Answer Hide Answer
Suggested Answer: D 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
GHOST1985
Highly Voted 2 years, 1 month ago
Selected Answer: D
Answer D because "Minimize the latency to access encryption keys"
upvoted 12 times
GHOST1985
2 years ago
Sorry answer is B
upvoted 3 times
...
...
marmar11111
Highly Voted 1 year, 11 months ago
Selected Answer: D
The default already has low latency! "Because of the high volume of keys at Google, and the need for low latency and high availability, DEKs are stored near the data that they encrypt. DEKs are encrypted with (wrapped by) a key encryption key (KEK), using a technique known as envelope encryption. These KEKs are not specific to customers; instead, one or more KEKs exist for each service." We need less complexity and low latency so use default on non-sensitive data!
upvoted 6 times
adb4007
8 months, 3 weeks ago
And keep KMS to be compliant with the sensitive-data strategy.
upvoted 1 times
...
...
shayke
Most Recent 1 year, 9 months ago
Selected Answer: D
D - the answer refers to both types of data: sensitive and non-sensitive.
upvoted 4 times
...
TonytheTiger
1 year, 10 months ago
Answer D https://cloud.google.com/docs/security/encryption/default-encryption
upvoted 6 times
...
AzureDP900
1 year, 11 months ago
B. Encrypt non-sensitive data and sensitive data with Cloud Key Management Service.
upvoted 2 times
...
coco10k
1 year, 11 months ago
Selected Answer: D
keeps complexity low
upvoted 3 times
...
AwesomeGCP
2 years ago
Selected Answer: B
B. Encrypt non-sensitive data and sensitive data with Cloud Key Management Service.
upvoted 1 times
...
GHOST1985
2 years ago
Selected Answer: B
✑ Schedule key rotation for sensitive data: Cloud KMS allows you to set a rotation schedule for symmetric keys to automatically generate a new key version at a fixed time interval. Multiple versions of a symmetric key can be active at any time for decryption, with only one primary key version used for encrypting new data. With EKM, create an externally managed key directly from the Cloud KMS console.
✑ Control which region the encryption keys for sensitive data are stored in: If using Cloud KMS, your cryptographic keys will be stored in the region where you deploy the resource. You also have the option of storing those keys inside a physical Hardware Security Module located in the region you choose with Cloud HSM.
✑ Minimize the latency to access encryption keys for both sensitive and non-sensitive data: Cloud KMS is available in several global locations and across multi-regions, allowing you to place your service where you want for low latency and high availability. https://cloud.google.com/security-key-management
upvoted 3 times
adb4007
8 months, 3 weeks ago
You're right, but you also need to "reduce key management complexity for non-sensitive data", which is why I go for D.
upvoted 1 times
...
...
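To see why Google default encryption already gives low-latency key access, the envelope-encryption mechanism quoted above (DEK stored near the data, wrapped by a KEK) can be sketched in a few lines. This is a toy illustration only: the SHA-256 counter-mode keystream stands in for AES and must never be used as real cryptography:

```python
import hashlib
import secrets

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy SHA-256 counter-mode keystream standing in for AES. Illustration only.
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[offset:offset + 32], pad))
    return bytes(out)

def encrypt_with_envelope(plaintext: bytes, kek: bytes):
    dek = secrets.token_bytes(32)            # data encryption key, stored near the data
    ciphertext = _keystream_xor(dek, plaintext)
    wrapped_dek = _keystream_xor(kek, dek)   # DEK wrapped by the KEK (the key KMS holds)
    return ciphertext, wrapped_dek

def decrypt_with_envelope(ciphertext: bytes, wrapped_dek: bytes, kek: bytes) -> bytes:
    dek = _keystream_xor(kek, wrapped_dek)   # unwrap the DEK first
    return _keystream_xor(dek, ciphertext)

kek = secrets.token_bytes(32)
ct, wrapped = encrypt_with_envelope(b"sensitive record", kek)
assert decrypt_with_envelope(ct, wrapped, kek) == b"sensitive record"
```

The design point: only the small wrapped DEK round-trips through the key service, while bulk data is encrypted with a key that lives next to it, which is what keeps latency low for both sensitive and non-sensitive data.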

Question 136

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 136 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 136
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your security team uses encryption keys to ensure confidentiality of user data. You want to establish a process to reduce the impact of a potentially compromised symmetric encryption key in Cloud Key Management Service (Cloud KMS).
Which steps should your team take before an incident occurs? (Choose two.)

  • A. Disable and revoke access to compromised keys.
  • B. Enable automatic key version rotation on a regular schedule.
  • C. Manually rotate key versions on an ad hoc schedule.
  • D. Limit the number of messages encrypted with each key version.
  • E. Disable the Cloud KMS API.
Show Suggested Answer Hide Answer
Suggested Answer: BD 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
parasthakur
Highly Voted 2 years ago
Selected Answer: BD
Should be BD. A is wrong because no compromise has happened, as the question states "before an incident". As per the documentation, "Limiting the number of messages encrypted with the same key version helps prevent attacks enabled by cryptanalysis." https://cloud.google.com/kms/docs/key-rotation
upvoted 10 times
...
zellck
Highly Voted 2 years ago
Selected Answer: BD
BD is the answer. The steps need to be done BEFORE an incident occurs.
upvoted 9 times
AzureDP900
1 year, 11 months ago
Yes, B and D
upvoted 4 times
...
...
glb2
Most Recent 6 months, 3 weeks ago
Selected Answer: AB
A. Disable and revoke access to compromised keys. B. Enable automatic key version rotation on a regular schedule.
upvoted 1 times
glb2
6 months, 2 weeks ago
I think I made a mistake. After consideration the correct answer is B and D.
upvoted 1 times
...
...
[Removed]
1 year, 2 months ago
Selected Answer: AB
A, B. Keys get stolen by an attacker, then the attacker infiltrates the network using those keys. The incident/compromise is when the attacker penetrates and steals data, not when the key is stolen. Theft happens when the burglar enters your house and steals stuff, not when they make a copy of your house key. If you suspect someone made a copy of your key, you go and change the locks and throw away your compromised keys before the incident occurs. So we're in the situation where there are "potentially compromised" keys and need to take action before the attacker uses the keys and hacks the company. We disable access to potentially compromised keys and rotate. https://cloud.google.com/kms/docs/key-rotation "If you suspect that a key version is compromised, disable it and revoke access to it as soon as possible."
upvoted 2 times
[Removed]
1 year, 2 months ago
That said, they did say "establish a process", which might indicate it's due diligence rather than a response to an actual key compromise. So I can see how B, D could be correct. Poorly worded question overall.
upvoted 2 times
...
...
PST21
1 year, 9 months ago
You want to reduce the impact, which would be after the issue has occurred, so it has to be A and B. If asked for preventive steps, then B and D.
upvoted 2 times
...
spiritix821
1 year, 9 months ago
https://cloud.google.com/kms/docs/key-rotation -> "If you suspect that a key version is compromised, disable it and revoke access to it as soon as possible", so A could be correct. Do you agree?
upvoted 2 times
...
AwesomeGCP
2 years ago
Selected Answer: BD
B. Enable automatic key version rotation on a regular schedule. D. Limit the number of messages encrypted with each key version.
upvoted 3 times
...
GHOST1985
2 years ago
Selected Answer: BD
Answers BD
upvoted 1 times
...
[Removed]
2 years, 1 month ago
Selected Answer: AB
should be AB.
upvoted 2 times
...
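The two chosen controls (B: scheduled rotation, D: limiting messages per key version) can be sketched together as one small policy object. All names here are hypothetical illustration, not Cloud KMS API calls:

```python
import itertools

class RotatingKey:
    """Sketch: automatic version rotation on a schedule (B) plus a cap on
    messages encrypted per version (D), so a compromised version exposes
    only a bounded window of data."""

    def __init__(self, rotation_period_days: int, max_messages_per_version: int):
        self.rotation_period_days = rotation_period_days
        self.max_messages = max_messages_per_version
        self._versions = itertools.count(1)
        self._rotate(day=0)

    def _rotate(self, day: int):
        self.version = next(self._versions)
        self.version_created_day = day
        self.messages_this_version = 0

    def encrypt(self, day: int) -> int:
        # Rotate if the schedule elapsed or the per-version message budget is spent.
        if (day - self.version_created_day >= self.rotation_period_days
                or self.messages_this_version >= self.max_messages):
            self._rotate(day)
        self.messages_this_version += 1
        return self.version  # version used for this message

key = RotatingKey(rotation_period_days=90, max_messages_per_version=2)
used = [key.encrypt(day=d) for d in (0, 1, 2, 95)]
assert used == [1, 1, 2, 3]  # cap forces v2, then the schedule forces v3
```

Both checks run before any incident: they bound how much ciphertext any single key version protects, which is exactly why B and D reduce the blast radius of a compromise.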

Question 137

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 137 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 137
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your company's chief information security officer (CISO) is requiring business data to be stored in specific locations due to regulatory requirements that affect the company's global expansion plans. After working on a plan to implement this requirement, you determine the following:
✑ The services in scope are included in the Google Cloud data residency requirements.
✑ The business data remains within specific locations under the same organization.
✑ The folder structure can contain multiple data residency locations.
✑ The projects are aligned to specific locations.
You plan to use the Resource Location Restriction organization policy constraint with very granular control. At which level in the hierarchy should you set the constraint?

  • A. Organization
  • B. Resource
  • C. Project
  • D. Folder
Show Suggested Answer Hide Answer
Suggested Answer: C 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
Littleivy
Highly Voted 1 year, 11 months ago
Selected Answer: C
Need to be in project level to have required granularity
upvoted 5 times
...
Bettoxicity
Most Recent 6 months, 1 week ago
Selected Answer: D
D. Why not C? Project-level constraints might not offer sufficient granularity: you might have multiple projects within a region that require further segregation based on specific data residency demands.
upvoted 1 times
...
shayke
1 year, 9 months ago
Selected Answer: C
C- granular
upvoted 4 times
...
TonytheTiger
1 year, 10 months ago
on the exam
upvoted 4 times
...
AzureDP900
1 year, 11 months ago
D should be right. This is the same as question 133.
upvoted 4 times
AzureDP900
1 year, 11 months ago
sorry it is C
upvoted 3 times
...
...
AwesomeGCP
1 year, 11 months ago
Selected Answer: C
C. Project
upvoted 4 times
...
coco10k
1 year, 11 months ago
Selected Answer: C
most granular
upvoted 1 times
...
soltium
1 year, 12 months ago
I think it's C. A and D will inherit the org policy, which makes it easier to manage but is the opposite of granular. For B, an org policy cannot be applied to a resource.
upvoted 1 times
...
TheBuckler
2 years ago
Answer is C. The key phrase here is "very granular" control. The most granular choice here is Project, as you cannot apply policy constraints to resources.
upvoted 3 times
...
GHOST1985
2 years, 1 month ago
Selected Answer: D
I would say D. Same question as 133, but with the new requirement that projects are aligned to specific locations, I think it is better to set up the restriction at a higher level, "Organization", so all the children (folders, projects) inherit the residency location restriction.
upvoted 2 times
GHOST1985
2 years, 1 month ago
Sorry, I meant answer A.
upvoted 2 times
AzureDP900
1 year, 11 months ago
C is right
upvoted 1 times
AzureDP900
1 year, 11 months ago
D is right
upvoted 1 times
AzureDP900
1 year, 11 months ago
Sorry it is C
upvoted 1 times
...
...
...
...
...

Question 138

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 138 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 138
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A database administrator notices malicious activities within their Cloud SQL instance. The database administrator wants to monitor the API calls that read the configuration or metadata of resources. Which logs should the database administrator review?

  • A. Admin Activity
  • B. System Event
  • C. Access Transparency
  • D. Data Access
Show Suggested Answer Hide Answer
Suggested Answer: D 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
GHOST1985
Highly Voted 1 year, 7 months ago
Selected Answer: D
answer D Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data.
upvoted 9 times
AzureDP900
1 year, 5 months ago
D. Data Access
upvoted 2 times
...
...
KLei
Most Recent 3 months, 2 weeks ago
Selected Answer: D
https://cloud.google.com/logging/docs/audit/gsuite-audit-logging#log-types Data Access audit logs contain API calls that **read** the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data. Admin Activity audit logs contain log entries for API calls or other actions that **modify** the configuration or metadata of resources
upvoted 1 times
...
roycehaven
5 months ago
It's A. Admin Activity audit logs contain log entries for API calls or other actions that modify the configuration or metadata of resources. For example, these logs record when users create VM instances or change Identity and Access Management permissions. Admin Activity audit logs are always written; you can't configure, exclude, or disable them. Even if you disable the Cloud Logging API, Admin Activity audit logs are still generated.
upvoted 2 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: D
D. Data Access
upvoted 2 times
...
Random_Mane
1 year, 7 months ago
Selected Answer: D
D. https://cloud.google.com/logging/docs/audit/#data-access "Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data."
upvoted 3 times
...
Baburao
1 year, 7 months ago
Should be D https://cloud.google.com/logging/docs/audit/#data-access
upvoted 2 times
...
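As a hedged illustration of answer D, a Logs Explorer filter for Cloud SQL Data Access entries might look like the following (the project ID is a placeholder; verify the log name against the audit-logging docs):

```python
# "my-project" is a placeholder; %2F is the URL-encoded "/" in the log name.
data_access_filter = (
    'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Fdata_access"'
    ' AND protoPayload.serviceName="cloudsql.googleapis.com"'
)

# Admin Activity entries live under ...%2Factivity instead; only the
# data_access log captures reads of resource configuration or metadata.
assert "data_access" in data_access_filter
```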

Question 139

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 139 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 139
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are backing up application logs to a shared Cloud Storage bucket that is accessible to both the administrator and analysts. Analysts should not have access to logs that contain any personally identifiable information (PII). Log files containing PII should be stored in another bucket that is only accessible to the administrator. What should you do?

  • A. Upload the logs to both the shared bucket and the bucket with PII that is only accessible to the administrator. Use the Cloud Data Loss Prevention API to create a job trigger. Configure the trigger to delete any files that contain PII from the shared bucket.
  • B. On the shared bucket, configure Object Lifecycle Management to delete objects that contain PII.
  • C. On the shared bucket, configure a Cloud Storage trigger that is only triggered when PII is uploaded. Use Cloud Functions to capture the trigger and delete the files that contain PII.
  • D. Use Pub/Sub and Cloud Functions to trigger a Cloud Data Loss Prevention scan every time a file is uploaded to the administrator's bucket. If the scan does not detect PII, have the function move the objects into the shared Cloud Storage bucket.
Show Suggested Answer Hide Answer
Suggested Answer: D 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
AzureDP900
Highly Voted 11 months, 1 week ago
D. Use Pub/Sub and Cloud Functions to trigger a Cloud Data Loss Prevention scan every time a file is uploaded to the administrator's bucket. If the scan does not detect PII, have the function move the objects into the shared Cloud Storage bucket
upvoted 8 times
...
jitu028
Highly Voted 1 year ago
Answer is D
upvoted 7 times
...
7f97f9f
Most Recent 1 month, 2 weeks ago
Selected Answer: A
A is correct. A ensures that PII is always stored securely and then removes PII from the less secure location. D is incorrect because the approach is overly complex and inefficient. It requires unnecessary data movement and processing. It also stores the files in the administrator's bucket first, then moves them to the shared bucket. It is much better to have the files go to the correct bucket to begin with.
upvoted 1 times
...
TNT87
6 months, 4 weeks ago
Selected Answer: D
Answer D
upvoted 3 times
...
menbuk
8 months ago
Selected Answer: D
Answer is D
upvoted 2 times
...
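The decision step in answer D can be sketched as a pure function. The bucket names and the findings shape below are illustrative, not the real DLP API response:

```python
ADMIN_BUCKET = "logs-admin-only"   # upload target; administrator access only
SHARED_BUCKET = "logs-shared"      # analysts may read this bucket

def destination_bucket(findings: list) -> str:
    # Any detected PII infoType keeps the file in the admin-only bucket;
    # clean files get moved to the shared bucket.
    return ADMIN_BUCKET if findings else SHARED_BUCKET

assert destination_bucket([{"infoType": "EMAIL_ADDRESS"}]) == "logs-admin-only"
assert destination_bucket([]) == "logs-shared"
```

The key property: files land in the locked-down bucket first and are only released to analysts after a clean scan, so PII is never exposed even transiently.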

Question 140

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 140 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 140
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You work for an organization in a regulated industry that has strict data protection requirements. The organization backs up their data in the cloud. To comply with data privacy regulations, this data can only be stored for a specific length of time and must be deleted after this specific period.
You want to automate the compliance with this regulation while minimizing storage costs. What should you do?

  • A. Store the data in a persistent disk, and delete the disk at expiration time.
  • B. Store the data in a Cloud Bigtable table, and set an expiration time on the column families.
  • C. Store the data in a BigQuery table, and set the table's expiration time.
  • D. Store the data in a Cloud Storage bucket, and configure the bucket's Object Lifecycle Management feature.
Show Suggested Answer Hide Answer
Suggested Answer: D 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
Baburao
Highly Voted 1 year, 7 months ago
should be D. To minimize costs, it's always GCS, even though BQ comes a close 2nd. But since the question did not specify what kind of data it is (raw files vs tabular data), it is safe to assume GCS is the preferred option with lifecycle enablement.
upvoted 9 times
...
gkarthik1919
Most Recent 6 months, 2 weeks ago
It must be D. BigQuery cost is high compared to a Cloud Storage bucket.
upvoted 2 times
...
GCBC
7 months, 2 weeks ago
Selected Answer: D
Cloud Storage is the cheapest way to store.
upvoted 3 times
...
TNT87
1 year ago
Selected Answer: D
Answer D
upvoted 3 times
...
TonytheTiger
1 year, 4 months ago
D is the answer. https://cloud.google.com/storage/docs/lifecycle
upvoted 4 times
...
AzureDP900
1 year, 5 months ago
D. Store the data in a Cloud Storage bucket, and configure the bucket's Object Lifecycle Management feature.
upvoted 1 times
...
zellck
1 year, 6 months ago
Selected Answer: D
D is the answer.
upvoted 1 times
...
GHOST1985
1 year, 6 months ago
Selected Answer: D
GCS is the preferred option with LifeCycle enablement.
upvoted 1 times
...
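For reference, a minimal Object Lifecycle Management configuration of the kind answer D relies on; the 365-day age is a placeholder for whatever retention period the regulation mandates:

```python
# Shape accepted by `gsutil lifecycle set` / the JSON API; 365 is a placeholder.
lifecycle_config = {
    "rule": [
        {
            "action": {"type": "Delete"},
            "condition": {"age": 365},  # days since object creation
        }
    ]
}

rule = lifecycle_config["rule"][0]
assert rule["action"]["type"] == "Delete"
assert rule["condition"]["age"] == 365
```

Once set on the bucket, deletion happens automatically with no per-object scripting, which is what makes this both compliant and cheap.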

Question 141

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 141 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 141
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You have been tasked with configuring Security Command Center for your organization's Google Cloud environment. Your security team needs to receive alerts of potential crypto mining in the organization's compute environment and alerts for common Google Cloud misconfigurations that impact security. Which Security Command Center features should you use to configure these alerts? (Choose two.)

  • A. Event Threat Detection
  • B. Container Threat Detection
  • C. Security Health Analytics
  • D. Cloud Data Loss Prevention
  • E. Google Cloud Armor
Show Suggested Answer Hide Answer
Suggested Answer: AC 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
zellck
Highly Voted 2 years ago
Selected Answer: AC
AC is the answer. https://cloud.google.com/security-command-center/docs/concepts-event-threat-detection-overview Event Threat Detection is a built-in service for the Security Command Center Premium tier that continuously monitors your organization and identifies threats within your systems in near-real time. https://cloud.google.com/security-command-center/docs/concepts-security-sources#security-health-analytics Security Health Analytics managed vulnerability assessment scanning for Google Cloud can automatically detect common vulnerabilities and misconfigurations across:
upvoted 11 times
...
TonytheTiger
Highly Voted 1 year, 10 months ago
on the exam
upvoted 5 times
...
dija123
Most Recent 6 months, 2 weeks ago
Selected Answer: AC
Agree with AC
upvoted 2 times
...
gkarthik1919
1 year ago
It must be AC
upvoted 1 times
...
TNT87
1 year, 6 months ago
Selected Answer: AC
Answer A, C
upvoted 1 times
...
AwesomeGCP
2 years ago
Selected Answer: AC
A. Event Threat Detection C. Security Health Analytics
upvoted 1 times
...
waikiki
2 years ago
Security Command Center and Google Cloud Armor are separate services. The question is asking about the functionality of the Security Command Center.
upvoted 1 times
...
Random_Mane
2 years, 1 month ago
Selected Answer: AC
A,C https://cloud.google.com/security-command-center/docs/concepts-security-command-center-overview
upvoted 1 times
...

Question 142

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 142 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 142
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You have noticed an increased number of phishing attacks across your enterprise user accounts. You want to implement the Google 2-Step Verification (2SV) option that uses a cryptographic signature to authenticate a user and verify the URL of the login page. Which Google 2SV option should you use?

  • A. Titan Security Keys
  • B. Google prompt
  • C. Google Authenticator app
  • D. Cloud HSM keys
Show Suggested Answer Hide Answer
Suggested Answer: A 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
TonytheTiger
Highly Voted 10 months ago
A. Titan Security Keys on the exam
upvoted 7 times
...
shayke
Most Recent 9 months, 2 weeks ago
Selected Answer: A
A is the right answer.
upvoted 1 times
...
AzureDP900
11 months, 1 week ago
A. https://store.google.com/us/product/titan_security_key?pli=1&hl=en-US Provides phishing-resistant 2nd factor of authentication for high-value users. Works with many devices, browsers & services. Supports FIDO standards.
upvoted 3 times
...
AwesomeGCP
1 year ago
Selected Answer: A
A. Titan Security Keys
upvoted 3 times
...
zellck
1 year ago
Selected Answer: A
A is the answer. https://cloud.google.com/titan-security-key Security keys use public key cryptography to verify a user’s identity and URL of the login page ensuring attackers can’t access your account even if you are tricked into providing your username and password.
upvoted 4 times
...
GHOST1985
1 year ago
Selected Answer: A
Titan Security Key: Help prevent account takeovers from phishing attacks.
upvoted 1 times
...
[Removed]
1 year, 1 month ago
Selected Answer: A
agreed
upvoted 2 times
...
Random_Mane
1 year, 1 month ago
Selected Answer: A
A. "Security keys use public key cryptography to verify a user’s identity and URL of the login page ensuring attackers can’t access your account even if you are tricked into providing your username and password." https://cloud.google.com/titan-security-key https://qwiklabs.medium.com/two-factor-authentication-annoying-but-important-5fdb9e731868
upvoted 3 times
Arturo_Cloud
1 year, 1 month ago
I totally agree.
upvoted 3 times
...
...
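The origin-binding property that makes security keys phishing-resistant can be sketched as follows. HMAC stands in for the key's public-key signature and all names are illustrative; the real mechanism is the FIDO/WebAuthn ceremony:

```python
import hashlib
import hmac
import secrets

# The security key signs the server challenge together with the origin the
# browser actually saw, so a signature from a phishing page never verifies.
def key_sign(device_secret: bytes, challenge: bytes, origin: str) -> bytes:
    return hmac.new(device_secret, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(device_secret: bytes, challenge: bytes,
                  expected_origin: str, sig: bytes) -> bool:
    expected = hmac.new(device_secret, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

device = secrets.token_bytes(32)
challenge = secrets.token_bytes(16)

# Legitimate login: the browser origin matches what the server expects.
ok = key_sign(device, challenge, "https://accounts.google.com")
assert server_verify(device, challenge, "https://accounts.google.com", ok)

# Phishing page: the signature binds the fake origin, so verification fails
# even though the user completed the 2SV ceremony.
phished = key_sign(device, challenge, "https://accounts.g00gle.example")
assert not server_verify(device, challenge, "https://accounts.google.com", phished)
```

This is why Titan keys defeat phishing where OTP apps and prompts do not: the signed assertion is cryptographically tied to the login page's URL.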

Question 143

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 143 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 143
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization hosts a financial services application running on Compute Engine instances for a third-party company. The third-party company's servers that will consume the application also run on Compute Engine in a separate Google Cloud organization. You need to configure a secure network connection between the Compute Engine instances. You have the following requirements:
✑ The network connection must be encrypted.
✑ The communication between servers must be over private IP addresses.
What should you do?

  • A. Configure a Cloud VPN connection between your organization's VPC network and the third party's that is controlled by VPC firewall rules.
  • B. Configure a VPC peering connection between your organization's VPC network and the third party's that is controlled by VPC firewall rules.
  • C. Configure a VPC Service Controls perimeter around your Compute Engine instances, and provide access to the third party via an access level.
  • D. Configure an Apigee proxy that exposes your Compute Engine-hosted application as an API, and is encrypted with TLS which allows access only to the third party.
Show Suggested Answer Hide Answer
Suggested Answer: B 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
lolanczos
1 month, 2 weeks ago
Selected Answer: B
B is correct because VPC peering establishes a private connection between VPC networks, allowing the Compute Engine instances to communicate using private IP addresses over Google’s encrypted backbone network. Option A (Cloud VPN) uses an encrypted tunnel but relies on public IP addresses; Option C (VPC Service Controls) is meant for securing service perimeters rather than direct network connectivity; and Option D (Apigee) is designed for API management, not for facilitating private network connections. Google Cloud. (n.d.). VPC Network Peering. Retrieved from https://cloud.google.com/vpc/docs/vpc-peering
upvoted 1 times
...
BPzen
4 months, 2 weeks ago
Selected Answer: A
Encrypted network connection: a Cloud VPN connection encrypts traffic between the two VPC networks using IPsec, which satisfies the encryption requirement.
Private IP communication: Cloud VPN enables communication between the two VPC networks over private IP addresses by establishing a secure tunnel.
Control via firewall rules: both organizations can manage traffic using VPC firewall rules, providing granular control over allowed communication.
Why not option B (VPC peering controlled by VPC firewall rules)? VPC peering does not itself encrypt traffic between networks, so it does not satisfy the encryption requirement.
upvoted 2 times
...
aygitci
1 year, 6 months ago
Selected Answer: A
the traffic between the VPCs is not encrypted by default.
upvoted 1 times
ppandher
1 year, 5 months ago
It is encrypted by default at Network layer.
upvoted 2 times
...
...
desertlotus1211
1 year, 7 months ago
https://cloud.google.com/docs/security/encryption-in-transit#:~:text=All%20VM%2Dto%2DVM%20traffic,End%20(GFE)%20using%20TLS. All VM-to-VM traffic within a VPC network and peered VPC networks is encrypted. So for this fact and what I written below - Answer B.
upvoted 4 times
desertlotus1211
1 year, 7 months ago
Also ask for private IP communication, so technically no routing (policy or other) should be involved
upvoted 1 times
...
...
desertlotus1211
1 year, 7 months ago
So I think this question makes on sense... If it's server-to-server calls, then TLS/HTTPS/SSL is being used, so the answer can be VPC Peering since the API calls are encrypted. It's poorly worded, and you will use service accounts for any communications and calls. You could use VPN, but you need a Cloud Router on both sides, policy routing, etc. for the CEs to talk. Thoughts?
upvoted 1 times
desertlotus1211
1 year, 7 months ago
I meant to say NO sense....
upvoted 1 times
...
...
Kouuupobol
1 year, 10 months ago
Selected Answer: A
Answer is A, because it is explicitly said that traffic must be encrypted. Moreover, communication within the VPN uses private IPs.
upvoted 3 times
deony
1 year, 10 months ago
i don't think that Cloud VPN use public IP, but encrypted. ref: https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview > Traffic traveling between the two networks is encrypted by one VPN gateway and then decrypted by the other VPN gateway. This action protects your data as it travels over the internet. but, with cloud interconnect, Cloud VPN can use private IP. i think it's too heavy works using VPN with cloud interconnect instead of using VPC peering.
upvoted 2 times
deony
1 year, 10 months ago
typo: i don't think -> i think
upvoted 1 times
...
...
...
TNT87
2 years ago
Selected Answer: B
Answer B
upvoted 1 times
...
alleinallein
2 years ago
Why not A? Any arguments?
upvoted 2 times
...
TonytheTiger
2 years, 4 months ago
B: https://cloud.google.com/vpc/docs/vpc-peering
upvoted 3 times
TonytheTiger
2 years, 4 months ago
Sorry - Ans C - Key point "separate Google Cloud Organization" Private Service Connect allows private consumption of services across VPC networks that belong to different groups, teams, projects, or organizations. https://cloud.google.com/vpc/docs/private-service-connect
upvoted 1 times
fad3r
2 years ago
You are right and wrong, You are right that yes Private Service Connect does indeed do this. You are wrong because that is not what C says. It says VPC Service Controls which is definitely wrong.
upvoted 1 times
...
...
...
Littleivy
2 years, 5 months ago
Selected Answer: B
B VPC Network Peering gives you several advantages over using external IP addresses or VPNs to connect networks https://cloud.google.com/vpc/docs/vpc-peering
upvoted 3 times
...
AzureDP900
2 years, 5 months ago
B. Configure a VPC peering connection between your organization's VPC network and the third party's that is controlled by VPC firewall rules.
upvoted 2 times
...
soltium
2 years, 6 months ago
A and B are both correct: Cloud VPN is encrypted, and VPC peering might be unencrypted in general, but this doc says it's encrypted. https://cloud.google.com/docs/security/encryption-in-transit#virtual_machine_to_virtual_machine
upvoted 3 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: B
B. Configure a VPC peering connection between your organization's VPC network and the third party's that is controlled by VPC firewall rules.
upvoted 2 times
...
zellck
2 years, 6 months ago
Selected Answer: B
B is the answer.
upvoted 2 times
...
[Removed]
2 years, 6 months ago
Selected Answer: B
final B
upvoted 2 times
...
GHOST1985
2 years, 6 months ago
Selected Answer: B
Google encrypts and authenticates data in transit at one or more network layers when data moves outside physical boundaries not controlled by Google or on behalf of Google. All VM-to-VM traffic within a VPC network and peered VPC networks is encrypted. https://cloud.google.com/docs/security/encryption-in-transit#cio-level_summary
upvoted 4 times
...
[Removed]
2 years, 7 months ago
Selected Answer: A
sry A
upvoted 1 times
...

Question 144

Question #: 144
Topic #: 1

Your company's new CEO recently sold two of the company's divisions. Your Director asks you to help migrate the Google Cloud projects associated with those divisions to a new organization node. Which preparation steps are necessary before this migration occurs? (Choose two.)

  • A. Remove all project-level custom Identity and Access Management (IAM) roles.
  • B. Disallow inheritance of organization policies.
  • C. Identify inherited Identity and Access Management (IAM) roles on projects to be migrated.
  • D. Create a new folder for all projects to be migrated.
  • E. Remove the specific migration projects from any VPC Service Controls perimeters and bridges.
Suggested Answer: CE 🗳️
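The two suggested preparation steps (C and E) can both be done from gcloud. A hedged sketch, with all organization, folder, policy, and project IDs hypothetical:

```shell
# C. Identify inherited IAM roles. Inherited bindings live on the project's
#    ancestors, so first find them, then inspect their policies:
gcloud projects get-ancestors dev-project-1
gcloud projects get-iam-policy dev-project-1
gcloud resource-manager folders get-iam-policy 987654321098
gcloud organizations get-iam-policy 123456789012

# E. Remove the project from a VPC Service Controls perimeter before the
#    migration (a project inside a perimeter cannot be migrated):
gcloud access-context-manager perimeters update my-perimeter \
    --policy=1234567890 \
    --remove-resources=projects/111111111111
```

Any roles that only appear on the folder or organization policy will be lost after the move and may need to be re-granted at the project level in the destination organization.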

Comments

Don10
Highly Voted 2 years, 6 months ago
Selected Answer: DE
D. https://cloud.google.com/resource-manager/docs/project-migration#import_export_folders E. https://cloud.google.com/resource-manager/docs/project-migration#vpcsc_security_perimeters
upvoted 11 times
...
marmar11111
Highly Voted 2 years, 4 months ago
Selected Answer: CD
https://cloud.google.com/resource-manager/docs/project-migration#plan_policy When you migrate your project, it will no longer inherit the policies from its current place in the resource hierarchy, and will be subject to the effective policy evaluation at its destination. We recommend making sure that the effective policies at the project's destination match as much as possible the policies that the project had in its source location. https://cloud.google.com/resource-manager/docs/project-migration#import_export_folders Policy inheritance can cause unintended effects when you are migrating a project, both in the source and destination organization resources. You can mitigate this risk by creating specific folders to hold only projects for export and import, and ensuring that the same policies are inherited by the folders in both organization resources. You can also set permissions on these folders that will be inherited to the projects moved within them, helping to accelerate the project migration process.
upvoted 7 times
...
BPzen
Most Recent 4 months, 1 week ago
Selected Answer: CE
IAM Role Inheritance: Projects inherit IAM roles from the organization or folder they belong to. When a project is moved to a new organization, these inherited roles are lost. Before migration, identify the inherited roles and reassign them explicitly at the project level if needed. VPC Service Controls Limitation: Projects in a VPC Service Controls perimeter or bridge cannot be moved between organizations. The perimeter must be updated to exclude the projects before migration. After the migration, you can reconfigure the projects to include them in a new or existing perimeter within the new organization.
upvoted 1 times
...
3574e4e
4 months, 3 weeks ago
Selected Answer: CE
C: Identity and Access Management policies and organization policies are inherited through the resource hierarchy, and can block a service from functioning if not set properly. Determine the effective policy at the project's destination in your resource hierarchy to ensure the policy aligns with your governance objectives. [https://cloud.google.com/resource-manager/docs/create-migration-plan#plan_policy] E: You cannot migrate a project that is protected by a VPC Service Controls security perimeter. [https://cloud.google.com/resource-manager/docs/handle-special-cases#vpcsc_security_perimeters] D is recommended but not mandatory [https://cloud.google.com/resource-manager/docs/create-migration-plan#import_export_folders]
upvoted 2 times
MoAk
4 months, 2 weeks ago
This is the way.
upvoted 1 times
...
...
3d9563b
8 months, 3 weeks ago
Selected Answer: CE
To prepare for migrating Google Cloud projects to a new organization node, you should identify inherited IAM roles on the projects to understand permission implications and remove the projects from any VPC Service Controls perimeters to avoid access issues during migration. These steps help ensure a smooth transition and maintain access control and security throughout the process.
upvoted 1 times
...
b6f53d8
1 year, 3 months ago
C&E in my opinion
upvoted 2 times
...
mjcts
1 year, 3 months ago
Selected Answer: CE
All the steps are relevant in some scenarios, but the most important 2 are C and E
upvoted 3 times
...
Crotofroto
1 year, 3 months ago
Selected Answer: CE
A. Removing all the project-level IAM roles would leave you with no record of what permissions were there to migrate.
B. Disallowing inheritance of organization policies would affect other projects.
C. Identify inherited Identity and Access Management (IAM) roles on projects to be migrated. Correct; this will help you migrate the IAM.
D. You don't need a new folder to migrate the projects.
E. Remove the specific migration projects from any VPC Service Controls perimeters and bridges. Correct; this is necessary because the project will no longer be part of the organization.
upvoted 4 times
...
phd72
1 year, 4 months ago
A, C https://cloud.google.com/resource-manager/docs/handle-special-cases
upvoted 1 times
...
Xoxoo
1 year, 6 months ago
Selected Answer: CE
Before migrating Google Cloud projects associated with sold divisions to a new organization node, the following preparation steps are necessary:
C. Identify inherited Identity and Access Management (IAM) roles on projects to be migrated: identify any IAM roles that are inherited by the projects you plan to migrate. This is important because you want to understand the existing access controls and permissions associated with these projects. Identifying inherited IAM roles allows you to plan how to manage permissions during and after the migration.
E. Remove the specific migration projects from any VPC Service Controls perimeters and bridges: if the projects you are migrating are currently part of any VPC Service Controls perimeters or bridges, remove them from these configurations. This ensures the projects can be migrated without being restricted by VPC Service Controls, and it allows you to manage their access controls separately in the new organization node.
upvoted 2 times
...
ananta93
1 year, 7 months ago
Selected Answer: CE
The Answer is CE
upvoted 2 times
...
desertlotus1211
1 year, 7 months ago
https://cloud.google.com/resource-manager/docs/create-migration-plan I think the answer can be BCD... E is incorrect
upvoted 1 times
...
ymkk
1 year, 7 months ago
Selected Answer: CE
Because... A) Custom project roles can be re-granted after migration. B) Policy inheritance does not change after migration. D) A new folder is not required before migration.
upvoted 3 times
...
Simon6666
1 year, 7 months ago
Selected Answer: CD
CD is the ans
upvoted 1 times
...
[Removed]
1 year, 8 months ago
Selected Answer: DE
D, E D- Using import/export folders is recommended for mitigating policy risk. E- You cannot migrate a project that's in a VPC Service Controls perimeter References: https://cloud.google.com/resource-manager/docs/create-migration-plan#import_export_folders https://cloud.google.com/resource-manager/docs/handle-special-cases#vpcsc_security_perimeters
upvoted 3 times
...
gcpengineer
1 year, 10 months ago
Selected Answer: CE
CE is the ans
upvoted 4 times
...
xfall12
1 year, 10 months ago
A E https://cloud.google.com/resource-manager/docs/handle-special-cases
upvoted 2 times
...

Question 145

Question #: 145
Topic #: 1

You are a consultant for an organization that is considering migrating their data from its private cloud to Google Cloud. The organization's compliance team is not familiar with Google Cloud and needs guidance on how compliance requirements will be met on Google Cloud. One specific compliance requirement is for customer data at rest to reside within specific geographic boundaries. Which option should you recommend for the organization to meet their data residency requirements on Google Cloud?

  • A. Organization Policy Service constraints
  • B. Shielded VM instances
  • C. Access control lists
  • D. Geolocation access controls
  • E. Google Cloud Armor
Suggested Answer: A 🗳️
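The resource-locations constraint the suggested answer refers to can be set org-wide from gcloud. A hedged sketch, assuming a hypothetical organization ID and an EU residency requirement (the `in:eu-locations` value group is one of the predefined location groups):

```shell
# Restrict where new resources (and their data at rest, for supported
# services) may be created:
gcloud resource-manager org-policies allow \
    constraints/gcp.resourceLocations in:eu-locations \
    --organization=123456789012

# Verify the effective policy as seen by the hierarchy:
gcloud resource-manager org-policies describe \
    constraints/gcp.resourceLocations \
    --organization=123456789012 --effective
```

Note that the constraint only applies to newly created resources; existing resources keep their current locations.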

Comments

[Removed]
Highly Voted 1 year, 7 months ago
Selected Answer: A
https://cloud.google.com/blog/products/identity-security/meet-data-residency-requirements-with-google-cloud
upvoted 6 times
...
Xoxoo
Most Recent 6 months, 3 weeks ago
Selected Answer: A
To meet the data residency requirements on Google Cloud, you can use Organization Policy Service constraints. This allows you to limit the physical location of a new resource with the Organization Policy Service resource locations constraint. You can use the location property of a resource to identify where it is deployed and maintained by the service. For data-containing resources of some Google Cloud services, this property also reflects the location where data is stored. This constraint allows you to define the allowed Google Cloud locations where the resources for supported services in your hierarchy can be created. After you define resource locations, this limitation will apply only to newly-created resources. Resources you created before setting the resource locations constraint will continue to exist and perform their function. Therefore, option A is the correct answer.
upvoted 3 times
...
desertlotus1211
7 months, 1 week ago
https://cloud.google.com/blog/products/identity-security/meet-data-residency-requirements-with-google-cloud putting back at the top for others
upvoted 1 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: A
A. Organization Policy Service constraints
upvoted 3 times
...
rrvv
1 year, 7 months ago
A. Organization Policy Service constraints to add org policy for Resource Location Restriction https://cloud.google.com/resource-manager/docs/organization-policy/using-constraints#list-constraint
upvoted 4 times
AzureDP900
1 year, 5 months ago
yes A. is right. Organization Policy Service constraints
upvoted 1 times
...
...

Question 146

Question #: 146
Topic #: 1

Your security team wants to reduce the risk of user-managed keys being mismanaged and compromised. To achieve this, you need to prevent developers from creating user-managed service account keys for projects in their organization. How should you enforce this?

  • A. Configure Secret Manager to manage service account keys.
  • B. Enable an organization policy to disable service accounts from being created.
  • C. Enable an organization policy to prevent service account keys from being created.
  • D. Remove the iam.serviceAccounts.getAccessToken permission from users.
Suggested Answer: C 🗳️
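The org policy in the suggested answer is the boolean `iam.disableServiceAccountKeyCreation` constraint. A sketch of enforcing it at the organization root (organization ID hypothetical):

```shell
# Block creation of user-managed service account keys org-wide:
gcloud resource-manager org-policies enable-enforce \
    constraints/iam.disableServiceAccountKeyCreation \
    --organization=123456789012

# The best-practices doc also recommends blocking uploads of externally
# generated keys:
gcloud resource-manager org-policies enable-enforce \
    constraints/iam.disableServiceAccountKeyUpload \
    --organization=123456789012
```

Individual projects that legitimately need keys (e.g. a CI/CD pipeline, as a commenter notes below) can override the constraint at the project level.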

Comments

AwesomeGCP
1 year, 6 months ago
Selected Answer: C
C. Enable an organization policy to prevent service account keys from being created.
upvoted 3 times
...
Random_Mane
1 year, 7 months ago
Selected Answer: C
C. https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys "To prevent unnecessary usage of service account keys, use organization policy constraints: At the root of your organization's resource hierarchy, apply the Disable service account key creation and Disable service account key upload constraints to establish a default where service account keys are disallowed. When needed, override one of the constraints for selected projects to re-enable service account key creation or upload."
upvoted 4 times
AzureDP900
1 year, 5 months ago
Yes, You are right Enable an organization policy to prevent service account keys from being created.
upvoted 1 times
...
desertlotus1211
7 months, 1 week ago
Your answer represents Answer B: to Disable service account key creation
upvoted 1 times
desertlotus1211
7 months, 1 week ago
Sorry it says service account NOT SA keys... Answer C
upvoted 2 times
...
...
...
Baburao
1 year, 7 months ago
C seems to be a correct option, but there must be an exclusion for CI/CD pipelines or SuperAdmins/OrgAdmins. Otherwise, nobody will be able to create service account keys.
upvoted 4 times
...

Question 147

Question #: 147
Topic #: 1

You are responsible for managing your company's identities in Google Cloud. Your company enforces 2-Step Verification (2SV) for all users. You need to reset a user's access, but the user lost their second factor for 2SV. You want to minimize risk. What should you do?

  • A. On the Google Admin console, select the appropriate user account, and generate a backup code to allow the user to sign in. Ask the user to update their second factor.
  • B. On the Google Admin console, temporarily disable the 2SV requirements for all users. Ask the user to log in and add their new second factor to their account. Re-enable the 2SV requirement for all users.
  • C. On the Google Admin console, select the appropriate user account, and temporarily disable 2SV for this account. Ask the user to update their second factor, and then re-enable 2SV for this account.
  • D. On the Google Admin console, use a super administrator account to reset the user account's credentials. Ask the user to update their credentials after their first login.
Suggested Answer: A 🗳️

Comments

zellck
Highly Voted 2 years, 6 months ago
Selected Answer: A
A is the answer. https://support.google.com/a/answer/9176734 Use backup codes for account recovery If you need to recover an account, use backup codes. Accounts are still protected by 2-Step Verification, and backup codes are easy to generate.
upvoted 6 times
AzureDP900
2 years, 5 months ago
Agreed. On the Google Admin console, select the appropriate user account, and generate a backup code to allow the user to sign in. Ask the user to update their second factor.
upvoted 3 times
...
...
BPzen
Most Recent 4 months, 1 week ago
Selected Answer: A
Account Remains Protected by 2SV: Backup codes act as a temporary second factor, ensuring the account stays protected by 2SV even during the recovery process.
upvoted 1 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: A
A. On the Google Admin console, select the appropriate user account, and generate a backup code to allow the user to sign in. Ask the user to update their second factor.
upvoted 4 times
...
Random_Mane
2 years, 7 months ago
Selected Answer: A
A. https://support.google.com/a/answer/9176734?hl=en
upvoted 4 times
...

Question 148

Question #: 148
Topic #: 1

Which Google Cloud service should you use to enforce access control policies for applications and resources?

  • A. Identity-Aware Proxy
  • B. Cloud NAT
  • C. Google Cloud Armor
  • D. Shielded VMs
Suggested Answer: A 🗳️
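As a hedged sketch of the suggested answer in practice, IAP can be enabled on a load-balancer backend service and access granted through IAM. The service name, group, and OAuth client values below are hypothetical placeholders:

```shell
# Turn on IAP for a backend service (OAuth client created beforehand):
gcloud iap web enable --resource-type=backend-services \
    --service=my-backend-service \
    --oauth2-client-id=CLIENT_ID \
    --oauth2-client-secret=CLIENT_SECRET

# Grant a group access through the proxy; this IAM binding IS the
# access control policy the question asks about:
gcloud iap web add-iam-policy-binding \
    --resource-type=backend-services \
    --service=my-backend-service \
    --member=group:webapp-users@example.com \
    --role=roles/iap.httpsResourceAccessor
```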

Comments

Random_Mane
Highly Voted 7 months, 1 week ago
Selected Answer: A
A. https://cloud.google.com/iap/docs/concepts-overview "Use IAP when you want to enforce access control policies for applications and resources."
upvoted 5 times
...
AzureDP900
Most Recent 5 months, 1 week ago
A. Identity-Aware Proxy
upvoted 2 times
...
AwesomeGCP
6 months ago
Selected Answer: A
A. Identity-Aware Proxy
upvoted 2 times
...

Question 149

Question #: 149
Topic #: 1

You want to update your existing VPC Service Controls perimeter with a new access level. You need to avoid breaking the existing perimeter with this change, and ensure the least disruptions to users while minimizing overhead. What should you do?

  • A. Create an exact replica of your existing perimeter. Add your new access level to the replica. Update the original perimeter after the access level has been vetted.
  • B. Update your perimeter with a new access level that never matches. Update the new access level to match your desired state one condition at a time to avoid being overly permissive.
  • C. Enable the dry run mode on your perimeter. Add your new access level to the perimeter configuration. Update the perimeter configuration after the access level has been vetted.
  • D. Enable the dry run mode on your perimeter. Add your new access level to the perimeter dry run configuration. Update the perimeter configuration after the access level has been vetted.
Suggested Answer: D 🗳️
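The dry-run workflow the suggested answer describes has dedicated gcloud subcommands. A sketch with hypothetical perimeter, policy, and access-level names:

```shell
# Add the new access level to the perimeter's DRY-RUN configuration only;
# the enforced configuration is untouched, so nothing breaks:
gcloud access-context-manager perimeters dry-run update my-perimeter \
    --policy=1234567890 \
    --add-access-levels=new_level

# Audit logs then show requests the dry-run config *would* have denied.
# Once the access level is vetted, promote the dry-run config to enforced:
gcloud access-context-manager perimeters dry-run enforce my-perimeter \
    --policy=1234567890
```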

Comments

zellck
Highly Voted 2 years ago
Selected Answer: D
D is the answer. https://cloud.google.com/vpc-service-controls/docs/dry-run-mode When using VPC Service Controls, it can be difficult to determine the impact to your environment when a service perimeter is created or modified. With dry run mode, you can better understand the impact of enabling VPC Service Controls and changes to perimeters in existing environments.
upvoted 6 times
AzureDP900
1 year, 11 months ago
D. Enable the dry run mode on your perimeter. Add your new access level to the perimeter dry run configuration. Update the perimeter configuration after the access level has been vetted.
upvoted 1 times
...
...
Baburao
Highly Voted 2 years, 1 month ago
D seems to be correct. https://cloud.google.com/vpc-service-controls/docs/manage-dry-run-configurations#updating_a_dry_run_configuration
upvoted 5 times
...
desertlotus1211
Most Recent 8 months ago
Answers are BOTH C&D... The problem I have is that both answers say the same thing...why such a question.
upvoted 1 times
...
AwesomeGCP
2 years ago
Selected Answer: D
D. Enable the dry run mode on your perimeter. Add your new access level to the perimeter dry run configuration. Update the perimeter configuration after the access level has been vetted.
upvoted 3 times
...

Question 150

Question #: 150
Topic #: 1

Your organization's Google Cloud VMs are deployed via an instance template that configures them with a public IP address in order to host web services for external users. The VMs reside in a service project that is attached to a host (VPC) project containing one custom Shared VPC for the VMs. You have been asked to reduce the exposure of the VMs to the internet while continuing to service external users. You have already recreated the instance template without a public IP address configuration to launch the managed instance group (MIG). What should you do?

  • A. Deploy a Cloud NAT Gateway in the service project for the MIG.
  • B. Deploy a Cloud NAT Gateway in the host (VPC) project for the MIG.
  • C. Deploy an external HTTP(S) load balancer in the service project with the MIG as a backend.
  • D. Deploy an external HTTP(S) load balancer in the host (VPC) project with the MIG as a backend.
Suggested Answer: C 🗳️
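A hedged sketch of the suggested answer: build the external load balancer's components in the service project with the MIG as backend, so the only public IP belongs to the load balancer, not the VMs. All names and the zone are hypothetical, and plain HTTP is used to keep the sketch short (production would use a target HTTPS proxy with a certificate):

```shell
gcloud compute health-checks create http web-hc \
    --project=svc-project --port=80

gcloud compute backend-services create web-backend \
    --project=svc-project --global \
    --protocol=HTTP --health-checks=web-hc --port-name=http

gcloud compute backend-services add-backend web-backend \
    --project=svc-project --global \
    --instance-group=web-mig --instance-group-zone=us-central1-a

gcloud compute url-maps create web-map \
    --project=svc-project --default-service=web-backend

gcloud compute target-http-proxies create web-proxy \
    --project=svc-project --url-map=web-map

# The forwarding rule holds the single external IP that fronts the MIG:
gcloud compute forwarding-rules create web-fr \
    --project=svc-project --global \
    --target-http-proxy=web-proxy --ports=80
```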

Comments

Littleivy
Highly Voted 2 years, 5 months ago
Selected Answer: C
Answer is C. NAT is for egress; to serve customers, you need to have the LB in the same project.
upvoted 14 times
...
GHOST1985
Highly Voted 2 years, 5 months ago
Selected Answer: C
No doubt the answer is C, this is the Two-tier web service model , below the example from google cloud documentation https://cloud.google.com/vpc/docs/shared-vpc#two-tier_web_service
upvoted 7 times
...
LaithTech
Most Recent 8 months ago
Selected Answer: D
Based on the network architecture and best practices for managing resources in a Shared VPC environment. Answer is D
upvoted 1 times
...
winston9
1 year, 2 months ago
Selected Answer: C
using an external HTTP(S) load balancer deployed within the service project, where the VMs reside, offers the most secure, efficient, and organizationally aligned solution for achieving your objective of minimizing internet exposure while maintaining external user access to your web services.
upvoted 2 times
...
gical
1 year, 3 months ago
Answer is C. https://cloud.google.com/load-balancing/docs/https#shared-vpc For the Application Load Balancer: "The regional external IP address, the forwarding rule, the target HTTP(S) proxy, and the associated URL map must be defined in the same project. This project can be the host project or a service project." The question is mentioning "VMs reside in a service project" and "have been asked to reduce the exposure of the VMs"
upvoted 2 times
...
TNT87
2 years ago
https://cloud.google.com/architecture/building-internet-connectivity-for-private-vms#objectives
upvoted 1 times
...
fad3r
2 years ago
The people who think it is Cloud NAT really do not have a fundamental grasp of how networking / NATting actually works
upvoted 2 times
...
shayke
2 years, 3 months ago
Selected Answer: C
C is the right ans
upvoted 2 times
...
AzureDP900
2 years, 5 months ago
B. Deploy a Cloud NAT Gateway in the host (VPC) project for the MIG.
upvoted 1 times
GHOST1985
2 years, 5 months ago
How could Cloud NAT expose an internal IP to public users?! Please refer to the documentation before answering! https://cloud.google.com/nat/docs/overview
upvoted 3 times
AzureDP900
2 years, 5 months ago
Thank you for sharing link, I am changing it to C
upvoted 1 times
...
...
...
coco10k
2 years, 5 months ago
Selected Answer: C
recently support for host project LBs was introduced but usually the LB stays with the backend services in the service project. so answer C
upvoted 4 times
asdf12345678
2 years, 5 months ago
the official doc still does not support frontend / backend of global https LB in different projects. so +1 to C (https://cloud.google.com/load-balancing/docs/features#network_topologies)
upvoted 1 times
...
...
Table2022
2 years, 5 months ago
Answer is C, The first example creates all of the load balancer components and backends in the service project. https://cloud.google.com/load-balancing/docs/https/setting-up-reg-ext-shared-vpc
upvoted 1 times
...
crisyeb
2 years, 5 months ago
Selected Answer: C
For me C is the answer. Cloud NAT is for outbound traffic and LB is to handle external customers' request to web services, so it is a LB. Between C and D: In this documentation https://cloud.google.com/load-balancing/docs/https#shared-vpc it says that "The global external IP address, the forwarding rule, the target HTTP(S) proxy, and the associated URL map must be defined in the same service project as the backends." and in the statement it says that the MIG are in the service project, so in my opinion the LB components must be in the service project.
upvoted 5 times
...
rotorclear
2 years, 6 months ago
Selected Answer: D
NAT is for outbound while the requirement is to serve external customers who will consume web service. Hence the choice is a LB not NAT
upvoted 2 times
...
soltium
2 years, 6 months ago
C is the answer. A/B: Cloud NAT only handles outbound connections from the VM to the internet. D: I'm pretty sure you can't select the service project's MIG as a backend when creating the LB in the host project.
upvoted 1 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: B
B. Deploy a Cloud NAT Gateway in the host (VPC) project for the MIG.
upvoted 1 times
...
zellck
2 years, 6 months ago
Selected Answer: D
D is the answer. https://cloud.google.com/load-balancing/docs/https#shared-vpc While you can create all the load balancing components and backends in the Shared VPC host project, this model does not separate network administration and service development responsibilities.
upvoted 5 times
...
rrvv
2 years, 7 months ago
In a Shared VPC design it is possible to create a separate NAT gateway in the service project; however, per best practices, a regional NAT gateway should be created in the host project for each regional subnet/network that is extended to the attached service projects. Hence I will opt for option B.
upvoted 1 times
GHOST1985
2 years, 6 months ago
The requirement says "while continuing to service external users". Cloud NAT does not expose services to external users; it is only used for internet egress, so answer C is the best answer.
upvoted 3 times
...
...

Question 151


Exam Professional Cloud Security Engineer topic 1 question 151 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 151
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your privacy team uses crypto-shredding (deleting encryption keys) as a strategy to delete personally identifiable information (PII). You need to implement this practice on Google Cloud while still utilizing the majority of the platform's services and minimizing operational overhead. What should you do?

  • A. Use client-side encryption before sending data to Google Cloud, and delete encryption keys on-premises.
  • B. Use Cloud External Key Manager to delete specific encryption keys.
  • C. Use customer-managed encryption keys to delete specific encryption keys.
  • D. Use Google default encryption to delete specific encryption keys.
Suggested Answer: C 🗳️

Comments

Random_Mane
Highly Voted 6 months, 3 weeks ago
Selected Answer: C
C. https://cloud.google.com/sql/docs/mysql/cmek "You might have situations where you want to permanently destroy data encrypted with CMEK. To do this, you destroy the customer-managed encryption key version. You can't destroy the keyring or key, but you can destroy key versions of the key."
upvoted 11 times
...
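The key-version destruction Random_Mane quotes can be sketched with gcloud (keyring, key, and location names below are hypothetical). Once the scheduled destruction completes, data encrypted under that version is unrecoverable, which is the crypto-shredding effect:

```shell
# Crypto-shredding with CMEK: destroy the key version that protected the PII.
# You can't destroy the keyring or key itself, only key versions.
gcloud kms keys versions destroy 1 \
    --key=pii-key \
    --keyring=pii-keyring \
    --location=us-central1
```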
AzureDP900
Most Recent 5 months, 1 week ago
C is right
upvoted 2 times
...
rotorclear
6 months ago
Selected Answer: C
CMEK allows users to manage their keys on google without operation overhead of managing keys externally
upvoted 4 times
...
AwesomeGCP
6 months ago
Selected Answer: C
C. Use customer-managed encryption keys to delete specific encryption keys.
upvoted 2 times
...
zellck
6 months, 2 weeks ago
Selected Answer: C
C is the answer to minimise operational overhead.
upvoted 3 times
...

Question 152


Exam Professional Cloud Security Engineer topic 1 question 152 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 152
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You need to centralize your team's logs for production projects. You want your team to be able to search and analyze the logs using Logs Explorer. What should you do?

  • A. Enable Cloud Monitoring workspace, and add the production projects to be monitored.
  • B. Use Logs Explorer at the organization level and filter for production project logs.
  • C. Create an aggregate org sink at the parent folder of the production projects, and set the destination to a Cloud Storage bucket.
  • D. Create an aggregate org sink at the parent folder of the production projects, and set the destination to a logs bucket.
Suggested Answer: D 🗳️

Comments

soltium
Highly Voted 1 year, 6 months ago
D, because with C we can't use Logs Explorer to read data from a Cloud Storage bucket.
upvoted 8 times
...
AwesomeGCP
Highly Voted 1 year, 6 months ago
Selected Answer: D
D. Create an aggregate org sink at the parent folder of the production projects, and set the destination to a logs bucket.
upvoted 7 times
...
Andrei_Z
Most Recent 7 months, 1 week ago
Selected Answer: A
The answer is A because you want to search and analyze logs using Logs Explorer
upvoted 2 times
Andrei_Z
7 months, 1 week ago
nevermind, I forgot Cloud Monitoring only monitors your resources and doesn't analyze logs
upvoted 2 times
...
...
Bill1000
1 year, 6 months ago
C is the answer .
upvoted 1 times
...
zellck
1 year, 6 months ago
Selected Answer: D
D is the answer. https://cloud.google.com/logging/docs/export/aggregated_sinks#supported-destinations You can use aggregated sinks to route logs within or between the same organizations and folders to the following destinations: - Another Cloud Logging bucket: Log entries held in Cloud Logging log buckets.
upvoted 4 times
AzureDP900
1 year, 5 months ago
Agree with you, D is right
upvoted 1 times
...
TNT87
1 year ago
What is this link for? It supports C as well. The point is we can't use Logs Explorer on Cloud Storage. That's what makes D the answer.
upvoted 1 times
...
...
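The aggregated sink zellck's link describes can be sketched as follows (the folder ID, central project, and bucket names are hypothetical): create a log bucket in a dedicated project, then route the production folder's logs into it so they stay searchable in Logs Explorer.

```shell
# Central Cloud Logging log bucket in a dedicated project.
gcloud logging buckets create prod-logs \
    --project=central-logging-project \
    --location=global \
    --description="Central production logs"

# Aggregated sink at the parent folder of the production projects;
# --include-children routes logs from every project under the folder.
gcloud logging sinks create prod-sink \
    logging.googleapis.com/projects/central-logging-project/locations/global/buckets/prod-logs \
    --folder=123456789 \
    --include-children
```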
Random_Mane
1 year, 7 months ago
Selected Answer: D
D. https://cloud.google.com/logging/docs/central-log-storage
upvoted 3 times
GHOST1985
1 year, 6 months ago
What is this link for?
upvoted 1 times
...
...

Question 153


Exam Professional Cloud Security Engineer topic 1 question 153 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 153
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You need to use Cloud External Key Manager to create an encryption key to encrypt specific BigQuery data at rest in Google Cloud. Which steps should you do first?

  • A. 1. Create or use an existing key with a unique uniform resource identifier (URI) in your Google Cloud project. 2. Grant your Google Cloud project access to a supported external key management partner system.
  • B. 1. Create or use an existing key with a unique uniform resource identifier (URI) in Cloud Key Management Service (Cloud KMS). 2. In Cloud KMS, grant your Google Cloud project access to use the key.
  • C. 1. Create or use an existing key with a unique uniform resource identifier (URI) in a supported external key management partner system. 2. In the external key management partner system, grant access for this key to use your Google Cloud project.
  • D. 1. Create an external key with a unique uniform resource identifier (URI) in Cloud Key Management Service (Cloud KMS). 2. In Cloud KMS, grant your Google Cloud project access to use the key.
Suggested Answer: C 🗳️

Comments

zellck
Highly Voted 1 year ago
Selected Answer: C
C is the answer. https://cloud.google.com/kms/docs/ekm#how_it_works - First, you create or use an existing key in a supported external key management partner system. This key has a unique URI or key path. - Next, you grant your Google Cloud project access to use the key, in the external key management partner system. - In your Google Cloud project, you create a Cloud EKM key, using the URI or key path for the externally-managed key.
upvoted 11 times
AzureDP900
11 months, 1 week ago
Thank you for detailed explanation, I agree with you
upvoted 1 times
...
...
TNT87
Most Recent 6 months, 1 week ago
Selected Answer: C
This section provides a broad overview of how Cloud EKM works with an external key. You can also follow the step-by-step instructions to create a Cloud EKM key accessed via the internet or via a VPC. 1.First, you create or use an existing key in a supported external key management partner system. This key has a unique URI or key path. 2. Next, you grant your Google Cloud project access to use the key, in the external key management partner system. 3. In your Google Cloud project, you create a Cloud EKM key, using the URI or key path for the externally managed key. https://cloud.google.com/kms/docs/ekm#how_it_works
upvoted 3 times
...
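Step 3 of the flow TNT87 quotes (create the Cloud EKM key in your project after steps 1 and 2 happen in the partner system) can be sketched with gcloud. All names and the key URI below are hypothetical, and exact flags may vary by gcloud version:

```shell
# Create a key with EXTERNAL protection level, without an initial version.
gcloud kms keys create bq-ekm-key \
    --keyring=ekm-keyring \
    --location=us-east1 \
    --purpose=encryption \
    --protection-level=external \
    --skip-initial-version-creation

# Add a version that points at the externally managed key's URI.
gcloud kms keys versions create \
    --key=bq-ekm-key \
    --keyring=ekm-keyring \
    --location=us-east1 \
    --external-key-uri="https://ekm.partner.example/v0/keys/abc123" \
    --primary
```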
erfg
9 months, 2 weeks ago
C is the answer
upvoted 1 times
...
AwesomeGCP
1 year ago
Selected Answer: C
C. 1. Create or use an existing key with a unique uniform resource identifier (URI) in a supported external key management partner system. 2. In the external key management partner system, grant access for this key to use your Google Cloud project.
upvoted 4 times
...
Baburao
1 year, 1 month ago
C seems to be correct option. https://cloud.google.com/kms/docs/ekm#how_it_works
upvoted 3 times
...

Question 154


Exam Professional Cloud Security Engineer topic 1 question 154 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 154
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your company's cloud security policy dictates that VM instances should not have an external IP address. You need to identify the Google Cloud service that will allow VM instances without external IP addresses to connect to the internet to update the VMs. Which service should you use?

  • A. Identity Aware-Proxy
  • B. Cloud NAT
  • C. TCP/UDP Load Balancing
  • D. Cloud DNS
Suggested Answer: B 🗳️

Comments

Random_Mane
Highly Voted 1 year, 1 month ago
Selected Answer: B
B https://cloud.google.com/nat/docs/overview "Cloud NAT (network address translation) lets certain resources without external IP addresses create outbound connections to the internet."
upvoted 6 times
...
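A minimal Cloud NAT setup matching the quoted overview can be sketched as two commands (network, region, and resource names are hypothetical): a Cloud Router plus a NAT config gives private VMs outbound internet access, e.g. for OS updates, without external IPs.

```shell
# Cloud Router in the VPC/region where the private VMs live.
gcloud compute routers create nat-router \
    --network=prod-vpc \
    --region=europe-west1

# NAT gateway: auto-allocated external IPs, covering all subnet ranges.
gcloud compute routers nats create nat-config \
    --router=nat-router \
    --region=europe-west1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```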
pedrojorge
Most Recent 8 months, 2 weeks ago
Selected Answer: B
Cloud NAT to control egress traffic.
upvoted 2 times
...
samuelmorher
9 months, 3 weeks ago
Selected Answer: B
https://cloud.google.com/nat/docs/overview
upvoted 1 times
...
AzureDP900
11 months, 1 week ago
Cloud NAT is right B
upvoted 1 times
...
AwesomeGCP
1 year ago
Selected Answer: B
B. Cloud NAT
upvoted 3 times
...

Question 155


Exam Professional Cloud Security Engineer topic 1 question 155 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 155
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You want to make sure that your organization's Cloud Storage buckets cannot have data publicly available to the internet. You want to enforce this across all
Cloud Storage buckets. What should you do?

  • A. Remove Owner roles from end users, and configure Cloud Data Loss Prevention.
  • B. Remove Owner roles from end users, and enforce domain restricted sharing in an organization policy.
  • C. Configure uniform bucket-level access, and enforce domain restricted sharing in an organization policy.
  • D. Remove *.setIamPolicy permissions from all roles, and enforce domain restricted sharing in an organization policy.
Suggested Answer: C 🗳️

Comments

GHOST1985
Highly Voted 1 year ago
Selected Answer: C
- Uniform bucket-level access: https://cloud.google.com/storage/docs/uniform-bucket-level-access#should-you-use - Domain Restricted Sharing: https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains#public_data_sharing
upvoted 5 times
...
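The two pieces GHOST1985 links can be sketched as commands (bucket name, Workspace customer ID, and organization ID are hypothetical): uniform bucket-level access on the bucket, plus a domain restricted sharing org policy so IAM members outside your domain can't be granted access.

```shell
# Enforce uniform bucket-level access (disables object ACLs).
gcloud storage buckets update gs://my-bucket \
    --uniform-bucket-level-access

# Domain restricted sharing: only allow members from your customer ID.
gcloud resource-manager org-policies allow \
    constraints/iam.allowedPolicyMemberDomains C03xxxxxx \
    --organization=123456789
```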
samuelmorher
Most Recent 9 months, 3 weeks ago
Selected Answer: C
It's C
upvoted 1 times
...
AzureDP900
11 months, 1 week ago
I agree with C
upvoted 1 times
...
AwesomeGCP
1 year ago
Selected Answer: C
C. Configure uniform bucket-level access, and enforce domain restricted sharing in an organization policy.
upvoted 2 times
...
zellck
1 year ago
Selected Answer: C
C is the answer.
upvoted 3 times
...

Question 156


Exam Professional Cloud Security Engineer topic 1 question 156 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 156
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your company plans to move most of its IT infrastructure to Google Cloud. They want to leverage their existing on-premises Active Directory as an identity provider for Google Cloud. Which two steps should you take to integrate the company's on-premises Active Directory with Google Cloud and configure access management? (Choose two.)

  • A. Use Identity Platform to provision users and groups to Google Cloud.
  • B. Use Cloud Identity SAML integration to provision users and groups to Google Cloud.
  • C. Install Google Cloud Directory Sync and connect it to Active Directory and Cloud Identity.
  • D. Create Identity and Access Management (IAM) roles with permissions corresponding to each Active Directory group.
  • E. Create Identity and Access Management (IAM) groups with permissions corresponding to each Active Directory group.
Suggested Answer: CD 🗳️

Comments

GHOST1985
Highly Voted 2 years, 6 months ago
Selected Answer: CE
https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-synchronizing-user-accounts?hl=en https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-synchronizing-user-accounts?hl=en#deciding_where_to_deploy_gcds
upvoted 9 times
Test114
2 years, 6 months ago
How about BE? https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-introduction "Single sign-on: Whenever a user needs to authenticate, Google Cloud delegates the authentication to Active Directory by using the Security Assertion Markup Language (SAML) protocol."
upvoted 1 times
zellck
2 years, 6 months ago
SAML is used for authentication, not provisioning.
upvoted 4 times
...
...
AzureDP900
2 years, 5 months ago
CE sounds good
upvoted 2 times
...
...
AwesomeGCP
Highly Voted 2 years, 6 months ago
Selected Answer: CE
C. Install Google Cloud Directory Sync and connect it to Active Directory and Cloud Identity. E. Create Identity and Access Management (IAM) groups with permissions corresponding to each Active Directory group.
upvoted 7 times
...
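Once GCDS has synced an AD group into Cloud Identity, granting it permissions is a single IAM binding per role. A sketch with hypothetical group, project, and role names:

```shell
# Grant a synced AD group a role in one binding; membership changes in AD
# flow through GCDS, so no per-user IAM edits are needed.
gcloud projects add-iam-policy-binding my-dev-project \
    --member="group:ad-developers@example.com" \
    --role="roles/compute.instanceAdmin.v1"
```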
BPzen
Most Recent 4 months, 1 week ago
Selected Answer: CE
Google Cloud Directory Sync (GCDS): Synchronizes user and group data from on-premises Active Directory to Cloud Identity, which is essential for enabling Active Directory as an identity provider. IAM Groups: Google Cloud IAM groups allow permissions to be managed collectively for a group of users. By aligning IAM groups with Active Directory groups, you can streamline access management across Google Cloud resources.
upvoted 2 times
...
BPzen
4 months, 2 weeks ago
Selected Answer: CE
To integrate on-premises Active Directory with Google Cloud for identity and access management, you need to synchronize your Active Directory users and groups with Google Cloud and map them to appropriate IAM permissions. C. Install Google Cloud Directory Sync and connect it to Active Directory and Cloud Identity. Google Cloud Directory Sync (GCDS) is used to synchronize users and groups from an on-premises Active Directory to Cloud Identity or Google Workspace. This ensures that user accounts and group memberships in Google Cloud mirror the structure of your Active Directory. E. Create Identity and Access Management (IAM) groups with permissions corresponding to each Active Directory group. After synchronizing groups from Active Directory to Google Cloud, you create IAM groups in Google Cloud and assign the appropriate permissions. Using IAM groups simplifies access control by allowing permissions to be managed at the group level instead of the user level.
upvoted 1 times
...
Roro_Brother
11 months, 1 week ago
Selected Answer: CD
GCDS is already creating the groups automatically. We need to create the IAM roles to assign to those groups. So D, not E
upvoted 2 times
...
Bettoxicity
1 year ago
Selected Answer: CD
CD Why not E?: IAM groups in Google Cloud are separate entities from IAM roles. While you could create IAM groups that mirror Active Directory groups, directly mapping permissions to IAM roles based on the corresponding Active Directory groups offers a more efficient and granular approach to access control.
upvoted 2 times
...
glb2
1 year ago
Selected Answer: CD
Answer is C and D.
upvoted 2 times
...
PTC231
1 year, 1 month ago
Answer: C and E. C. Install Google Cloud Directory Sync and connect it to Active Directory and Cloud Identity: GCDS synchronizes user and group information from on-premises Active Directory to Cloud Identity, so the information stays consistent across both environments. E. Create IAM groups with permissions corresponding to each Active Directory group: once synchronization is set up, you can create IAM groups in Google Cloud that mirror the Active Directory groups and assign permissions based on the roles and access levels required for each group. This simplifies access management by aligning Google Cloud permissions with existing Active Directory groups.
upvoted 2 times
...
PhuocT
1 year, 1 month ago
Selected Answer: CD
C and D I think, we don't need to create group, as it will be synced from AD, we only need to focus on creating the role for the group.
upvoted 3 times
...
desertlotus1211
1 year, 2 months ago
Answers: B & C... There is NO such thing as IAM groups in GCP
upvoted 1 times
...
mjcts
1 year, 2 months ago
Selected Answer: CD
GCDS is already creating the groups automatically. We need to create the IAM roles to assign to those groups. So D, not E
upvoted 3 times
...
[Removed]
1 year, 3 months ago
Bard says CE. User and Groups are already imported with GCDS, so you need to focus on creating roles
upvoted 1 times
...
aygitci
1 year, 6 months ago
Selected Answer: CD
Not E, as the groups are already synced and retrieved, so roles will be attached to them.
upvoted 6 times
...
gkarthik1919
1 year, 6 months ago
CE seems to be correct. B is required only for SSO. GCDS would also provision users and groups.
upvoted 1 times
...
Mithung30
1 year, 8 months ago
Selected Answer: CD
CD is correct
upvoted 4 times
...
a190d62
1 year, 8 months ago
Selected Answer: CD
There is a possibility to synchronize groups between AD and Google Cloud so why not to use it and focus on creating roles https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-introduction?hl=en#mapping_groups
upvoted 3 times
...
tauseef71
2 years, 1 month ago
CD is the right answer. C> sync with AD user and groups ; D> give users and groups the roles in IAM.
upvoted 4 times
...

Question 157


Exam Professional Cloud Security Engineer topic 1 question 157 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 157
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are in charge of creating a new Google Cloud organization for your company. Which two actions should you take when creating the super administrator accounts? (Choose two.)

  • A. Create an access level in the Google Admin console to prevent super admin from logging in to Google Cloud.
  • B. Disable any Identity and Access Management (IAM) roles for super admin at the organization level in the Google Cloud Console.
  • C. Use a physical token to secure the super admin credentials with multi-factor authentication (MFA).
  • D. Use a private connection to create the super admin accounts to avoid sending your credentials over the Internet.
  • E. Provide non-privileged identities to the super admin users for their day-to-day activities.
Suggested Answer: CE 🗳️

Comments

Baburao
Highly Voted 2 years, 1 month ago
I think CE makes a better option. See documentation below: https://cloud.google.com/resource-manager/docs/super-admin-best-practices
upvoted 9 times
...
gkarthik1919
Most Recent 1 year ago
CE are right answer.
upvoted 1 times
...
alleinallein
1 year, 6 months ago
Why E?
upvoted 1 times
shanwford
5 months, 3 weeks ago
The super-admin users should not do their daily business as admin. Best practice is to use separate accounts that only have limited rights (least privilege).
upvoted 1 times
...
...
samuelmorher
1 year, 9 months ago
Selected Answer: CE
it's CE
upvoted 2 times
...
AzureDP900
1 year, 11 months ago
CE is good
upvoted 2 times
...
AwesomeGCP
2 years ago
Selected Answer: CE
C. Use a physical token to secure the super admin credentials with multi-factor authentication (MFA). E. Provide non-privileged identities to the super admin users for their day-to-day activities.
upvoted 4 times
...
zellck
2 years ago
Selected Answer: CE
CE is the answer. https://cloud.google.com/resource-manager/docs/super-admin-best-practices#discourage_super_admin_account_usage - Use a security key or other physical authentication device to enforce two-step verification - Give super admins a separate account that requires a separate login
upvoted 2 times
AzureDP900
1 year, 11 months ago
Thanks
upvoted 1 times
...
...

Question 158


Exam Professional Cloud Security Engineer topic 1 question 158 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 158
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are deploying a web application hosted on Compute Engine. A business requirement mandates that application logs are preserved for 12 years and data is kept within European boundaries. You want to implement a storage solution that minimizes overhead and is cost-effective. What should you do?

  • A. Create a Cloud Storage bucket to store your logs in the EUROPE-WEST1 region. Modify your application code to ship logs directly to your bucket for increased efficiency.
  • B. Configure your Compute Engine instances to use the Google Cloud's operations suite Cloud Logging agent to send application logs to a custom log bucket in the EUROPE-WEST1 region with a custom retention of 12 years.
  • C. Use a Pub/Sub topic to forward your application logs to a Cloud Storage bucket in the EUROPE-WEST1 region.
  • D. Configure a custom retention policy of 12 years on your Google Cloud's operations suite log bucket in the EUROPE-WEST1 region.
Suggested Answer: B 🗳️

Comments

tangac
Highly Voted 2 years, 7 months ago
A and C are the two possible answers (12 years of retention is not possible with Cloud Logging; the max is 3650 days), so now the question is: Pub/Sub or not Pub/Sub. In my opinion, since it says to limit overhead, I would go with A, but I'm not really sure.
upvoted 14 times
mohomad7
2 years ago
https://cloud.google.com/logging/docs/buckets#custom-retention Cloud Logging max 3650 days
upvoted 5 times
...
meh009
2 years, 4 months ago
Correct. Tested and can verify this. Between A and C. and I would choose A.
upvoted 2 times
giu2301
1 year, 12 months ago
Rewriting code is never the best answer imho. Why not use Pub/Sub? We do that for any 3rd-party app. I'm positive that B and D are wrong. Still thinking which of A and C would have the least operational overhead.
upvoted 2 times
...
...
[Removed]
1 year, 8 months ago
With "C" you're forwarding logs which means you either have two copies (if you're forwarding without deleting original) or best case, you have an intermediate step/hop. Whereas with "A", the app is writing directly to the bucket in Europe so only one copy guaranteed and one journey from app to storage instead of going through an intermediate steps. So "A" is less overhead.
upvoted 2 times
...
...
GHOST1985
Highly Voted 2 years, 7 months ago
Selected Answer: B
A: Google recommends avoiding new code when it offers a managed service for the purpose => incorrect. B: seems to meet the needs => correct. C: Pub/Sub is not used for forwarding logs; it is an event notification service, and no 12-year retention configuration is proposed => incorrect. D: how would the application forward the logs to the bucket? => incorrect.
upvoted 10 times
KLei
3 months, 2 weeks ago
Seems there is a limitation of retention period for the Google Log Buckets. So A is the correct answer https://cloud.google.com/logging/docs/buckets#create_bucket Optional: To set a custom retention period for the logs in the bucket, click Next. In the Retention period field, enter the number of days, between 1 day and **3650 days**, that you want Cloud Logging to retain your logs. If you don't customize the retention period, the default is 30 days.
upvoted 1 times
...
...
YourFriendlyNeighborhoodSpider
Most Recent 3 weeks, 5 days ago
Selected Answer: B
A cannot be correct, in the question you see that "12 years retention" is a MANDATORY REQUIREMENT. -> People in the comment complain that maximum is 3650 days (10 years), sure, not 12 years, but DEFAULT RETENTION IS 30 DAYS IF YOU GO WITH OPTION A, SO DEFINITELY NOT THE CORRECT ONE, SO I RATHER GO WITH B AND SAVE MYSELF TROUBLES. -> Moreover A requires changing the application code, which is not advisable by best practices. Logging solutions should be simple to implement, not to change your code.
upvoted 1 times
...
KLei
3 months, 2 weeks ago
Selected Answer: A
B is OK if the retention period is 10 years. So A should be the best answer https://cloud.google.com/logging/docs/buckets In the Retention period field, enter the number of days, between 1 day and 3650 days, that you want Cloud Logging to retain your logs. If you don't customize the retention period, the default is 30 days.
upvoted 1 times
...
Pime13
3 months, 4 weeks ago
Selected Answer: B
The best option to meet your requirements is B: Configure your Compute Engine instances to use the Google Cloud's operations suite Cloud Logging agent to send application logs to a custom log bucket in the EUROPE-WEST1 region with a custom retention of 12 years. This solution ensures that: Logs are automatically collected and managed by the Cloud Logging agent, reducing manual overhead. Data is stored within the specified European region. A custom retention policy of 12 years is applied, meeting the business requirement for log preservation. plus: Compute Engine instances do not automatically log into Cloud Logging. You need to install an agent to enable this functionality. Specifically, you can use the Ops Agent, which is recommended for new Google Cloud workloads as it combines both logging and monitoring capabilities
upvoted 1 times
...
MoAk
4 months, 1 week ago
Selected Answer: C
Because A is a hassle, and Google never recommends messing with app code.
upvoted 1 times
...
BPzen
4 months, 1 week ago
Selected Answer: B
B. Configure your Compute Engine instances to use the Google Cloud's operations suite Cloud Logging agent to send application logs to a custom log bucket in the EUROPE-WEST1 region with a custom retention of 12 years. Option D is not feasible for a 12-year retention requirement because the default log buckets in Google Cloud's operations suite have a fixed retention period of 365 days, which cannot be changed. If the retention requirement exceeds 365 days, a custom log bucket must be used instead.
upvoted 1 times
...
BPzen
4 months, 1 week ago
Selected Answer: B
Option B: Provides a seamless and integrated logging solution while ensuring compliance with location and retention requirements.
upvoted 1 times
...
2ndjuly
4 months, 1 week ago
Selected Answer: B
A is unnecessary complexity
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: C
Without doubt it's between A and C due to the obvious retention caveats on log buckets. I choose C because of Google's push to simplify everything and to use their own native services rather than tinkering with your app code. Answer C.
upvoted 1 times
...
KLei
5 months, 1 week ago
Max custom log retention: https://cloud.google.com/logging/docs/buckets#custom-retention
upvoted 2 times
...
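The cap KLei links is the crux of the A-vs-B debate here. As a sketch (bucket and archive names hypothetical): a log bucket's custom retention tops out at 3650 days, while a Cloud Storage retention policy can cover the full 12 years.

```shell
# Cloud Logging log bucket: custom retention is capped at 3650 days
# per the linked docs, short of the 12-year requirement.
gcloud logging buckets update app-logs \
    --location=europe-west1 \
    --retention-days=3650

# Cloud Storage: a retention policy can be set to 12 years.
gsutil retention set 12y gs://my-log-archive-eu
```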
Mr_MIXER007
7 months, 1 week ago
Selected Answer: A
upvoted 1 times
...
3d9563b
8 months, 3 weeks ago
Selected Answer: B
Option B is the best approach because it leverages the Google Cloud's operations suite Cloud Logging agent for efficient log collection, ensures compliance with data residency requirements by storing logs in the EUROPE-WEST1 region, and allows for setting a custom retention policy of 12 years. This solution balances operational efficiency with compliance and cost-effectiveness.
upvoted 1 times
...
Roro_Brother
11 months, 1 week ago
Selected Answer: A
A is the solution because you can't have a retention of more than 3650 days.
upvoted 1 times
...
irmingard_examtopics
12 months ago
Selected Answer: C
We need a Cloud Storage bucket not a log bucket, as their max log retention period is 10 years, so B and D are out. A does not minimize overhead as it is additional work. That leaves C in my opinion.
upvoted 3 times
...
Natan97
1 year ago
B is correct. This option makes sense because the approach decreases overhead and optimizes cost.
upvoted 1 times
...
Bettoxicity
1 year ago
Selected Answer: A
A With Cloud Storage you can set a maximum retention period of 3,155,760,000 seconds (100 years). You can configure Cloud Logging to retain your logs only between 1 day and 3650 days.
upvoted 2 times
...

Question 159


Exam Professional Cloud Security Engineer topic 1 question 159 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 159
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You discovered that sensitive personally identifiable information (PII) is being ingested to your Google Cloud environment in the daily ETL process from an on- premises environment to your BigQuery datasets. You need to redact this data to obfuscate the PII, but need to re-identify it for data analytics purposes. Which components should you use in your solution? (Choose two.)

  • A. Secret Manager
  • B. Cloud Key Management Service
  • C. Cloud Data Loss Prevention with cryptographic hashing
  • D. Cloud Data Loss Prevention with automatic text redaction
  • E. Cloud Data Loss Prevention with deterministic encryption using AES-SIV
Suggested Answer: BE 🗳️

Comments

GHOST1985
Highly Voted 2 years, 1 month ago
Selected Answer: BE
B: you need KMS to store the CryptoKey https://cloud.google.com/dlp/docs/reference/rest/v2/projects.deidentifyTemplates#crypt E: for the de-identity you need to use CryptoReplaceFfxFpeConfig or CryptoDeterministicConfig https://cloud.google.com/dlp/docs/reference/rest/v2/projects.deidentifyTemplates#cryptodeterministicconfig https://cloud.google.com/dlp/docs/deidentify-sensitive-data
upvoted 14 times
Ric350
1 year, 6 months ago
BE is correct. Ghost links are correct and this link here shows a reference architecture using cloud KMS and Cloud DLP https://cloud.google.com/architecture/de-identification-re-identification-pii-using-cloud-dlp
upvoted 6 times
...
...
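The B+E approach GHOST1985 outlines maps onto a DLP `content.deidentify` request body. A minimal sketch in Python of that JSON shape; the wrapped key material, KMS key name, and surrogate infoType name are placeholder assumptions, not values from the question:

```python
def build_deidentify_config(wrapped_key_b64, kms_key_name, surrogate="PII_TOKEN"):
    """Build the deidentifyConfig portion of a DLP v2 request that uses
    CryptoDeterministicConfig (AES-SIV) with a Cloud KMS-wrapped key, so the
    tokens are reversible via a matching re-identify request."""
    return {
        "deidentifyConfig": {
            "infoTypeTransformations": {
                "transformations": [{
                    "primitiveTransformation": {
                        "cryptoDeterministicConfig": {
                            "cryptoKey": {
                                "kmsWrapped": {
                                    "wrappedKey": wrapped_key_b64,      # placeholder
                                    "cryptoKeyName": kms_key_name,      # placeholder
                                }
                            },
                            "surrogateInfoType": {"name": surrogate},
                        }
                    }
                }]
            }
        }
    }
```

The same `cryptoKey` block is what ties answer B (Cloud KMS) to answer E: DLP unwraps the data key with KMS before tokenizing.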
mjcts
Most Recent 9 months, 1 week ago
Selected Answer: BE
KMS for storing the encryption key Deterministic encryption so that you can reverse the process
upvoted 1 times
...
gkarthik1919
1 year ago
BE are right. D is incorrect because automatic text redaction will remove the sensitive PII data which is not the requirement .
upvoted 2 times
...
anshad666
1 year, 1 month ago
Selected Answer: BE
looks viable
upvoted 1 times
...
gcpengineer
1 year, 4 months ago
why should anyone use KMS to determine PII?
upvoted 1 times
Good question.......
upvoted 1 times
...
...
gcpengineer
1 year, 4 months ago
Selected Answer: DE
DE is the ans
upvoted 1 times
gcpengineer
1 year, 4 months ago
BE is the answer
upvoted 1 times
...
...
AzureDP900
1 year, 11 months ago
B & E is right
upvoted 2 times
...
AwesomeGCP
2 years ago
Selected Answer: BE
B. Cloud Key Management Service E. Cloud Data Loss Prevention with deterministic encryption using AES-SIV
upvoted 4 times
...
zellck
2 years ago
Selected Answer: BE
BE is the answer.
upvoted 4 times
...
waikiki
2 years ago
No. Checking the documentation, the CryptoKey here is a data encryption key (DEK), as opposed to a key encryption key (KEK) stored by Cloud Key Management Service (Cloud KMS).
upvoted 1 times
Ric350
1 year, 6 months ago
It's BE. BE is correct. Ghost links are correct and this link here shows a reference architecture using cloud KMS and Cloud DLP https://cloud.google.com/architecture/de-identification-re-identification-pii-using-cloud-dlp
upvoted 2 times
...
...

Question 160

Exam Professional Cloud Security Engineer topic 1 question 160 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 160
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are working with a client that is concerned about control of their encryption keys for sensitive data. The client does not want to store encryption keys at rest in the same cloud service provider (CSP) as the data that the keys are encrypting. Which Google Cloud encryption solutions should you recommend to this client?
(Choose two.)

  • A. Customer-supplied encryption keys.
  • B. Google default encryption
  • C. Secret Manager
  • D. Cloud External Key Manager
  • E. Customer-managed encryption keys
Suggested Answer: AD 🗳️

Comments

AwesomeGCP
Highly Voted 1 year, 6 months ago
Selected Answer: AD
A. Customer-supplied encryption keys. D. Cloud External Key Manager
upvoted 6 times
...
DST
Highly Voted 1 year, 6 months ago
Selected Answer: AD
CSEK & EKM both store keys outside of GCP
upvoted 6 times
...
gcpengineer
Most Recent 10 months, 3 weeks ago
what about CMEK?
upvoted 2 times
[Removed]
8 months, 2 weeks ago
in CMEK, even though the keys are managed by the customer, they're still using the cloud service Cloud KMS. So they are still in the same cloud provider as the data, which is not desired per the question. Reference: https://cloud.google.com/kms/docs/cmek#cmek
upvoted 3 times
...
...
TNT87
1 year ago
Selected Answer: AD
Answer A and D
upvoted 1 times
...
AzureDP900
1 year, 5 months ago
A,D is perfect
upvoted 3 times
...
soltium
1 year, 6 months ago
I'm leaning towards D because CSEK is so limited.
upvoted 1 times
soltium
1 year, 6 months ago
whoops didn't read I need to select two, so AD it is.
upvoted 1 times
...
...

Question 161

Exam Professional Cloud Security Engineer topic 1 question 161 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 161
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are implementing data protection by design and in accordance with GDPR requirements. As part of design reviews, you are told that you need to manage the encryption key for a solution that includes workloads for Compute Engine, Google Kubernetes Engine, Cloud Storage, BigQuery, and Pub/Sub. Which option should you choose for this implementation?

  • A. Cloud External Key Manager
  • B. Customer-managed encryption keys
  • C. Customer-supplied encryption keys
  • D. Google default encryption
Suggested Answer: B 🗳️

Comments

zellck
Highly Voted 2 years, 6 months ago
Selected Answer: B
B is the answer. https://cloud.google.com/kms/docs/using-other-products#cmek_integrations https://cloud.google.com/kms/docs/using-other-products#cmek_integrations CMEK is supported for all the listed google services.
upvoted 20 times
...
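For the CMEK answer, every listed service references the key by the same Cloud KMS resource name. A minimal sketch, assuming placeholder project, key-ring, and key names; the bucket body shows the Cloud Storage JSON API's `defaultKmsKeyName` field as one example of a CMEK integration:

```python
def kms_key_name(project, location, key_ring, key):
    # Resource-name format Cloud KMS uses for a CryptoKey; CMEK integrations
    # (Cloud Storage, BigQuery, Pub/Sub, GCE disks, ...) reference keys this way.
    return (f"projects/{project}/locations/{location}"
            f"/keyRings/{key_ring}/cryptoKeys/{key}")

# Example: a buckets.insert request body that sets the bucket's default CMEK.
# All names here are placeholders.
bucket_body = {
    "name": "example-bucket",
    "encryption": {
        "defaultKmsKeyName": kms_key_name(
            "my-proj", "europe-west1", "gdpr-ring", "data-key"
        )
    },
}
```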
Littleivy
Highly Voted 2 years, 5 months ago
Selected Answer: A
Obviously A is the better answer. Based on the GCP blog [1], you can utilize Cloud External Key Manager (Cloud EKM) to manage customer key easily and fulfill the compliance requirements as Key Access Justifications is already GA. Also, Cloud EKM supports all the services listed in the questions per the reference [2] [1] https://cloud.google.com/blog/products/compliance/how-google-cloud-helps-customers-stay-current-with-gdpr [2] https://cloud.google.com/kms/docs/ekm#supported_services
upvoted 12 times
gcpengineer
1 year, 10 months ago
unfortunately not supported for all services
upvoted 1 times
orcnylmz
1 year, 9 months ago
All services mentioned in the question are supported by EKM https://cloud.google.com/kms/docs/ekm#supported_services
upvoted 3 times
...
...
...
KLei
Most Recent 3 months, 2 weeks ago
Selected Answer: B
The point is the integration with Google native services: Compute Engine, Google Kubernetes Engine, Cloud Storage, BigQuery, and Pub/Sub CMEK covers more services than CSEK. https://medium.com/google-cloud/data-encryption-techniques-in-google-cloud-gmek-cmek-csek-928d072a1e9d "Customer-managed encryption keys (CMEK): This method allows customers to create and manage their own encryption keys in Google Cloud KMS, which are used to encrypt data at rest in Google Cloud Storage, Google BigQuery, Google Cloud SQL, and other services that support CMEK" "Customer-supplied encryption keys (CSEK): This method allows customers to use their own encryption keys to encrypt data at rest in Google Cloud Storage and Google Compute disks."
upvoted 1 times
...
KLei
3 months, 2 weeks ago
Selected Answer: B
Seems CMEK supports all the Google services in the question https://cloud.google.com/kms/docs/compatible-services#cmek_integrations
upvoted 1 times
...
Mr_MIXER007
7 months, 1 week ago
Selected Answer: B
B. Customer-managed encryption keys
upvoted 1 times
...
Roro_Brother
11 months, 1 week ago
Selected Answer: B
B is the answer. https://cloud.google.com/kms/docs/using-other-products#cmek_integrations https://cloud.google.com/kms/docs/using-other-products#cmek_integrations CMEK is supported for all the listed google services.
upvoted 2 times
...
Roro_Brother
11 months, 1 week ago
Selected Answer: B
B. Customer-managed encryption keys With customer-managed encryption keys (CMEK), you have control over the encryption keys used to protect your data in Google Cloud Platform services such as Compute Engine, Google Kubernetes Engine, Cloud Storage, BigQuery, and Pub/Sub. This ensures that you can manage and control the keys in a way that aligns with GDPR requirements and provides an additional layer of security for your data.
upvoted 2 times
...
Bettoxicity
1 year ago
Selected Answer: B
B Why not A?: GCP doesn't offer a service called "Cloud External Key Manager." While there are external key management solutions, they might not integrate seamlessly with all GCP services you're using.
upvoted 2 times
...
glb2
1 year ago
Selected Answer: B
B. Customer-managed encryption keys With customer-managed encryption keys (CMEK), you have control over the encryption keys used to protect your data in Google Cloud Platform services such as Compute Engine, Google Kubernetes Engine, Cloud Storage, BigQuery, and Pub/Sub. This ensures that you can manage and control the keys in a way that aligns with GDPR requirements and provides an additional layer of security for your data.
upvoted 1 times
...
dija123
1 year, 1 month ago
Selected Answer: B
All mentioned services are supported by CMEK
upvoted 1 times
...
Nachtwaker
1 year, 1 month ago
Selected Answer: B
A or B, where B does not require additional assets/resources and thus (sounds like it would be) cheaper
upvoted 3 times
...
b6f53d8
1 year, 2 months ago
I work with banks in the EU; they are using CMEK in general and it is GDPR compliant - B
upvoted 2 times
...
hakunamatataa
1 year, 6 months ago
Selected Answer: A
With my current client in Europe, where GDPR is mandate, we are using EKM.
upvoted 3 times
...
[Removed]
1 year, 8 months ago
Selected Answer: A
Seems to be EKM in conjunction with CMEK to support all the required services. However it's EKM specifically that enables customers to store keys in europe and enforce various controls over their keys as required by GDPR. https://cloud.google.com/blog/products/compliance/how-google-cloud-helps-customers-stay-current-with-gdpr https://cloud.google.com/kms/docs/using-other-products#cmek_integrations
upvoted 4 times
...
TNT87
2 years ago
Selected Answer: B
Cloud External Key Manager (option A) is an option for customers who require full control over their encryption keys while leveraging Google Cloud's Key Management Service. However, this option is generally not required for GDPR compliance.
upvoted 3 times
TNT87
2 years ago
https://cloud.google.com/kms/docs/compatible-services#cmek_integrations
upvoted 1 times
...
...
alleinallein
2 years ago
Selected Answer: A
EKM is GDPR compliant
upvoted 1 times
...
Examster1
2 years, 2 months ago
Answer is A and please read the docs. Cloud EKM is GDPR compliant and does support all the services listed. Where is the confusion here?
upvoted 4 times
gcpengineer
1 year, 10 months ago
It doesn't
upvoted 1 times
...
...

Question 162

Exam Professional Cloud Security Engineer topic 1 question 162 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 162
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Which Identity-Aware Proxy role should you grant to an Identity and Access Management (IAM) user to access HTTPS resources?

  • A. Security Reviewer
  • B. IAP-Secured Tunnel User
  • C. IAP-Secured Web App User
  • D. Service Broker Operator
Suggested Answer: C 🗳️

Comments

gkarthik1919
6 months, 2 weeks ago
C. https://cloud.google.com/iap/docs/managing-access#:~:text=Use%20the%20IAP%20Policy%20Admin,HTTPS%20resources%20that%20use%20IAP.
upvoted 2 times
...
rottzy
6 months, 2 weeks ago
c. IAP-Secured Web App User: Grants access to the app and other HTTPS resources that use IAP
upvoted 2 times
...
cyberpunk21
7 months, 3 weeks ago
Selected Answer: C
Provide permission to access HTTPS resources which use identity aware proxy
upvoted 1 times
...
[Removed]
8 months, 2 weeks ago
Selected Answer: C
C roles/iap.httpsResourceAccessor https://cloud.google.com/iam/docs/understanding-roles#cloud-iap-roles
upvoted 3 times
...
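The role in answer C is `roles/iap.httpsResourceAccessor`, as the [Removed] comment notes. A minimal sketch of the IAM policy binding it corresponds to; the member address is a placeholder:

```python
# "IAP-Secured Web App User" grants access to HTTPS resources behind IAP.
IAP_WEB_USER = "roles/iap.httpsResourceAccessor"

def grant_iap_web_access(policy, member):
    """Append an IAP web-access binding to an IAM policy dict."""
    policy.setdefault("bindings", []).append(
        {"role": IAP_WEB_USER, "members": [member]}
    )
    return policy

# Placeholder principal for illustration.
policy = grant_iap_web_access({}, "user:alice@example.com")
```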
AzureDP900
1 year, 5 months ago
C is right IAP-secured Web App User (roles/iap.httpsResourceAccessor) Provides permission to access HTTPS resources which use Identity-Aware Proxy.
upvoted 3 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: C
C. IAP-Secured Web App User
upvoted 2 times
...
GHOST1985
1 year, 6 months ago
Selected Answer: C
Answer C IAP-Secured Tunnel User: Grants access to tunnel resources that use IAP. IAP-Secured Web App User: Access HTTPS resources which use Identity-Aware Proxy, Grants access to App Engine, Cloud Run, and Compute Engine resources.
upvoted 4 times
...
Random_Mane
1 year, 7 months ago
Selected Answer: C
C, https://cloud.google.com/iap/docs/managing-access "IAP-Secured Web App User: Grants access to the app and other HTTPS resources that use IAP."
upvoted 3 times
...
Baburao
1 year, 7 months ago
Should be C. It is clearly mentioned here in Documentation: https://cloud.google.com/iap/docs/managing-access#roles IAP-Secured Web App User (roles/iap.httpsResourceAccessor)
upvoted 3 times
...

Question 163

Exam Professional Cloud Security Engineer topic 1 question 163 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 163
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You need to audit the network segmentation for your Google Cloud footprint. You currently operate Production and Non-Production infrastructure-as-a-service (IaaS) environments. All your VM instances are deployed without any service account customization.
After observing the traffic in your custom network, you notice that all instances can communicate freely, despite tag-based VPC firewall rules with a priority of 1000 in place to segment traffic properly. What are the most likely reasons for this behavior?

  • A. All VM instances are missing the respective network tags.
  • B. All VM instances are residing in the same network subnet.
  • C. All VM instances are configured with the same network route.
  • D. A VPC firewall rule is allowing traffic between source/targets based on the same service account with priority 999.
  • E. A VPC firewall rule is allowing traffic between source/targets based on the same service account with priority 1001.
Suggested Answer: AD 🗳️

Comments

nah99
4 months, 2 weeks ago
Please separate answers D & E so it's less confusing
upvoted 2 times
...
Mr_MIXER007
7 months, 1 week ago
Selected Answer: AD
All VM instances are missing the respective network tags + A VPC firewall rule is allowing traffic between source/targets based on the same service account with priority 999
upvoted 1 times
...
Bettoxicity
1 year ago
Selected Answer: A
A This scenario would bypass the tag-based firewall rules you've implemented. If VMs lack the intended tags, the firewall rules wouldn't be able to identify and filter traffic based on those tags.
upvoted 1 times
...
dija123
1 year ago
Selected Answer: AD
Answers A,D
upvoted 1 times
...
desertlotus1211
1 year, 7 months ago
Remember you can ONLY use EITHER service account or tags to filter traffic. You cannot mix. https://medium.com/google-cloud/gcp-cloud-vpc-firewall-with-service-accounts-9902661a4021#:~:text=VPC%20firewall%20rules%20let%20you,on%20a%20per%2Dinstance%20basis. Answers A,D
upvoted 3 times
...
[Removed]
1 year, 8 months ago
Selected Answer: AD
A, D Either the VMs are not tagged properly or there's another firewall rule that takes precedence.
upvoted 4 times
...
gcpengineer
1 year, 10 months ago
Selected Answer: D
D is the only answer
upvoted 3 times
...
Ric350
2 years ago
How is D even an option and considered? The question itself clearly states "All your VM instances are deployed WITHOUT any service account customization." That means the firewall rule would NOT let any traffic through as there's no SA on the vm's to apply the rule. A is a likely scenario and could easily be overlooked when deploying. B is very unlikely and one big flat network. C is also likely due to admin mistake and overlooking like A. I'd go with A and C as the answer here. Unless I'm interpreting it wrong or missing something here.
upvoted 3 times
Bettoxicity
1 year ago
You are right, also, how is a rule with priority 1001 going to have priority over another rule with 1000?
upvoted 1 times
...
gcpengineer
1 year, 10 months ago
it means all VMs are using the same SA
upvoted 5 times
...
...
GCParchitect2022
2 years, 3 months ago
Selected Answer: AD
A. All VM instances are missing the respective network tags. D. A VPC firewall rule is allowing traffic between source/targets based on the same service account with priority 999. If all the VM instances in your Google Cloud environment are able to communicate freely despite tag-based VPC firewall rules in place, it is likely that the instances are missing the necessary network tags. Without the appropriate tags, the firewall rules will not be able to properly segment the traffic. Another possible reason for this behavior could be the existence of a VPC firewall rule that allows traffic between source and target instances based on the same service account, with a priority of 999. This rule would take precedence over the tag-based firewall rules with a priority of 1000. It is unlikely that all the VM instances are residing in the same network subnet or configured with the same network route, or that there is a VPC firewall rule allowing traffic with a priority of 1001.
upvoted 4 times
...
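The priority logic several commenters cite can be made concrete with a toy model (not a real API): among matching rules, the one with the LOWEST priority number wins, so an allow rule at 999 between VMs sharing a service account shadows a deny at 1000 (option D), while a rule at 1001 never fires if a priority-1000 rule already matches.

```python
def evaluate(rules, packet):
    """First matching rule in ascending priority order decides the action."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["match"](packet):
            return rule["action"]
    return "implied-default"  # GCP's implied rules sit at priority 65535

# Sketch of the scenario in option D: same-service-account allow at 999,
# a catch-all deny at 1000 standing in for the tag-based segmentation rules.
rules = [
    {"priority": 999,  "action": "allow", "match": lambda p: p["same_sa"]},
    {"priority": 1000, "action": "deny",  "match": lambda p: True},
]
```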
zanhsieh
2 years, 3 months ago
I hit this question on the real exam. It's supposed to be a choose-TWO question. I would pick CD as my answer. A: WRONG. The question already stated "despite tag-based VPC firewall rules in place to segment traffic properly, with a priority of 1000", so network tags are already in place. B: WRONG. The customer could use the default network across the globe, and then VMs in one region's subnet could ping VMs in another region's subnet. C: CORRECT. D: CORRECT. E: WRONG. Firewall rules with higher priority have values less than 1000, as the question stated.
upvoted 1 times
theereechee
2 years, 3 months ago
A & D are correct. You can have tag-based firewall rule in place, but without actually applying the tags to instances, the firewall rule is useless/meaningless.
upvoted 5 times
gcpengineer
1 year, 10 months ago
but if only a few tags are missing, then not all VMs should be able to talk
upvoted 1 times
...
...
...
zanhsieh
2 years, 3 months ago
I hit this question. It's supposed to be a select-TWO question. Option D definitely would be the right answer. The other one I have no idea about.
upvoted 2 times
...
adelynllllllllll
2 years, 4 months ago
D: a rule with priority 999 will override one with priority 1000
upvoted 1 times
...
Littleivy
2 years, 5 months ago
Selected Answer: D
The answer is D
upvoted 2 times
...
rotorclear
2 years, 6 months ago
Selected Answer: AD
1001 is lower priority
upvoted 2 times
...
soltium
2 years, 6 months ago
D. priority 999 is a higher priority than 1000, so if 999 has allow all policy then any deny policy with lower priority will not be applied.
upvoted 3 times
JoeBar
1 year, 7 months ago
Really confusing. D alone is enough for traffic to be allowed before hitting the tag-based rule, but if you combine A and E the same applies: with A (missing tags) the priority-1000 rule is missed, and traffic is then allowed by the 1001 rule. So A+E should also work, while D is a standalone condition. I really can't make a decision here.
upvoted 1 times
...
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: C
C. All VM instances are configured with the same network route.
upvoted 2 times
dat987
2 years, 5 months ago
Do you have any documents for this? Thanks
upvoted 1 times
...
...
redgoose6810
2 years, 6 months ago
maybe A . any idea please.
upvoted 3 times
maxth3mad
2 years, 6 months ago
maybe B too ... same subnet ...
upvoted 3 times
maxth3mad
2 years, 6 months ago
but if a firewall rule is in place, probably A
upvoted 1 times
...
...
...

Question 164

Exam Professional Cloud Security Engineer topic 1 question 164 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 164
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are creating a new infrastructure CI/CD pipeline to deploy hundreds of ephemeral projects in your Google Cloud organization to enable your users to interact with Google Cloud. You want to restrict the use of the default networks in your organization while following Google-recommended best practices. What should you do?

  • A. Enable the constraints/compute.skipDefaultNetworkCreation organization policy constraint at the organization level.
  • B. Create a cron job to trigger a daily Cloud Function to automatically delete all default networks for each project.
  • C. Grant your users the IAM Owner role at the organization level. Create a VPC Service Controls perimeter around the project that restricts the compute.googleapis.com API.
  • D. Only allow your users to use your CI/CD pipeline with a predefined set of infrastructure templates they can deploy to skip the creation of the default networks.
Suggested Answer: A 🗳️

Comments

zellck
Highly Voted 1 year, 6 months ago
Selected Answer: A
A is the answer. https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints - constraints/compute.skipDefaultNetworkCreation This boolean constraint skips the creation of the default network and related resources during Google Cloud Platform Project resource creation where this constraint is set to True. By default, a default network and supporting resources are automatically created when creating a Project resource.
upvoted 5 times
AzureDP900
1 year, 5 months ago
Agreed
upvoted 1 times
...
...
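Answer A corresponds to a boolean constraint in the v1 Organization Policy API. A sketch of the `setOrgPolicy` payload, with a placeholder organization ID:

```python
# Placeholder resource on which the policy would be set.
ORG = "organizations/123456789"

# Body for a setOrgPolicy call: enforcing this boolean constraint at the
# organization level skips default-network creation in every new project.
set_org_policy_request = {
    "policy": {
        "constraint": "constraints/compute.skipDefaultNetworkCreation",
        "booleanPolicy": {"enforced": True},
    }
}
```

Setting it once at the organization node covers all hundreds of ephemeral projects the pipeline creates, which is why it beats the cron-cleanup option B.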
desertlotus1211
Most Recent 7 months ago
https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints
upvoted 2 times
...
shayke
1 year, 3 months ago
Selected Answer: A
A-Org Policy
upvoted 2 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: A
A. Enable the constraints/compute.skipDefaultNetworkCreation organization policy constraint at the organization level.
upvoted 4 times
...
Random_Mane
1 year, 7 months ago
Selected Answer: A
A. https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints "This boolean constraint skips the creation of the default network and related resources during Google Cloud Platform Project resource creation where this constraint is set to True. By default, a default network and supporting resources are automatically created when creating a Project resource. constraints/compute.skipDefaultNetworkCreation"
upvoted 2 times
...

Question 165

Exam Professional Cloud Security Engineer topic 1 question 165 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 165
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are a security administrator at your company and are responsible for managing access controls (identification, authentication, and authorization) on Google Cloud. Which Google-recommended best practices should you follow when configuring authentication and authorization? (Choose two.)

  • A. Use Google default encryption.
  • B. Manually add users to Google Cloud.
  • C. Provision users with basic roles using Google's Identity and Access Management (IAM) service.
  • D. Use SSO/SAML integration with Cloud Identity for user authentication and user lifecycle management.
  • E. Provide granular access with predefined roles.
Suggested Answer: DE 🗳️

Comments

zellck
Highly Voted 6 months, 2 weeks ago
Selected Answer: DE
DE is the answer.
upvoted 6 times
zellck
6 months, 2 weeks ago
https://cloud.google.com/iam/docs/using-iam-securely#least_privilege Basic roles include thousands of permissions across all Google Cloud services. In production environments, do not grant basic roles unless there is no alternative. Instead, grant the most limited predefined roles or custom roles that meet your needs.
upvoted 3 times
...
...
Littleivy
Highly Voted 5 months ago
Selected Answer: DE
Answer is DE of course
upvoted 5 times
...
AzureDP900
Most Recent 5 months, 1 week ago
DE is perfect
upvoted 4 times
...
AwesomeGCP
6 months ago
Selected Answer: DE
D. Use SSO/SAML integration with Cloud Identity for user authentication and user lifecycle management. E. Provide granular access with predefined roles.
upvoted 4 times
...
GHOST1985
6 months, 2 weeks ago
Selected Answer: D
Answer : DE
upvoted 3 times
...

Question 166

Exam Professional Cloud Security Engineer topic 1 question 166 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 166
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You have been tasked with inspecting IP packet data for invalid or malicious content. What should you do?

  • A. Use Packet Mirroring to mirror traffic to and from particular VM instances. Perform inspection using security software that analyzes the mirrored traffic.
  • B. Enable VPC Flow Logs for all subnets in the VPC. Perform inspection on the Flow Logs data using Cloud Logging.
  • C. Configure the Fluentd agent on each VM Instance within the VPC. Perform inspection on the log data using Cloud Logging.
  • D. Configure Google Cloud Armor access logs to perform inspection on the log data.
Suggested Answer: A 🗳️

Comments

zellck
Highly Voted 6 months, 2 weeks ago
Selected Answer: A
A is the answer. https://cloud.google.com/vpc/docs/packet-mirroring Packet Mirroring clones the traffic of specified instances in your Virtual Private Cloud (VPC) network and forwards it for examination. Packet Mirroring captures all traffic and packet data, including payloads and headers.
upvoted 6 times
...
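Answer A's setup is a `packetMirrorings` resource in the Compute Engine API. A rough sketch of its body; the project, network, and collector forwarding-rule names are all placeholder assumptions:

```python
# Packet Mirroring clones full packets (headers and payloads) from the
# mirrored VMs and sends them to a collector internal load balancer that
# fronts the inspection appliances.
mirroring = {
    "name": "inspect-prod-traffic",
    "network": {"url": "projects/my-proj/global/networks/prod-vpc"},
    # Collector: internal LB forwarding rule in front of the IDS/security VMs.
    "collectorIlb": {
        "url": "projects/my-proj/regions/us-central1/forwardingRules/ids-collector"
    },
    # Mirror instances by tag; subnets or instance lists could be used instead.
    "mirroredResources": {"tags": ["inspect"]},
}
```

VPC Flow Logs (option B), by contrast, record only flow metadata, not packet payloads, so they cannot be inspected for malicious content.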
AzureDP900
Most Recent 5 months, 1 week ago
A is right
upvoted 2 times
...
AwesomeGCP
6 months ago
Selected Answer: A
A. Use Packet Mirroring to mirror traffic to and from particular VM instances. Perform inspection using security software that analyzes the mirrored traffic.
upvoted 4 times
...
Random_Mane
7 months, 1 week ago
Selected Answer: A
A. https://cloud.google.com/vpc/docs/packet-mirroring#enterprise_security "Packet Mirroring clones the traffic of specified instances in your Virtual Private Cloud (VPC) network and forwards it for examination. Packet Mirroring captures all traffic and packet data, including payloads and headers."
upvoted 4 times
...
Baburao
7 months, 1 week ago
Sorry, it should be A, not B.
upvoted 4 times
...
Baburao
7 months, 1 week ago
Should be B. VPC FLow logs cannot capture packet information. https://cloud.google.com/vpc/docs/using-packet-mirroring
upvoted 1 times
...

Question 167

Exam Professional Cloud Security Engineer topic 1 question 167 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 167
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You have the following resource hierarchy. There is an organization policy at each node in the hierarchy as shown. Which load balancer types are denied in VPC A?

  • A. All load balancer types are denied in accordance with the global node's policy.
  • B. INTERNAL_TCP_UDP, INTERNAL_HTTP_HTTPS is denied in accordance with the folder's policy.
  • C. EXTERNAL_TCP_PROXY, EXTERNAL_SSL_PROXY are denied in accordance with the project's policy.
  • D. EXTERNAL_TCP_PROXY, EXTERNAL_SSL_PROXY, INTERNAL_TCP_UDP, and INTERNAL_HTTP_HTTPS are denied in accordance with the folder and project's policies.
Suggested Answer: A 🗳️

Comments

tangac
Highly Voted 2 years, 7 months ago
Selected Answer: A
the good answer is A as indicated here : https://cloud.google.com/load-balancing/docs/org-policy-constraints#gcloud
upvoted 14 times
AzureDP900
2 years, 5 months ago
yes, It is A
upvoted 3 times
...
...
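The disagreement in this thread is essentially about `inheritFromParent` semantics. A toy model (not a real API) of list-constraint evaluation that shows how the two readings produce answers C and A respectively:

```python
def effective_denied(policies):
    """Evaluate denied values for policies ordered org -> folder -> project.
    A policy without inheritFromParent=True replaces its parent's policy;
    with it, denied values merge down the hierarchy."""
    denied, deny_all = set(), False
    for p in policies:
        if not p.get("inheritFromParent", False):
            denied, deny_all = set(), False  # replace the parent's policy
        if p.get("allValues") == "DENY":
            deny_all = True
        denied |= set(p.get("deniedValues", []))
    return "ALL" if deny_all else denied

# The hierarchy from the question, as sketched policy dicts.
org    = {"allValues": "DENY"}
folder = {"deniedValues": ["INTERNAL_TCP_UDP", "INTERNAL_HTTP_HTTPS"]}
proj   = {"deniedValues": ["EXTERNAL_TCP_PROXY", "EXTERNAL_SSL_PROXY"]}
```

Under replace semantics only the project's policy applies (answer C); under merge semantics the organization-level DENY of all values wins (answer A), which is what the suggested answer assumes.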
JohnDohertyDoe
Most Recent 3 months, 2 weeks ago
Selected Answer: D
DENY values at a lower level override higher-level policies if they have more restrictive constraints, so answer cannot be A.
upvoted 1 times
...
BPzen
4 months, 1 week ago
Selected Answer: A
Explanation: The global policy applies across the entire resource hierarchy unless explicitly overridden. Because it denies all load balancer types, no load balancers can be created in VPC A. The folder and project policies are redundant in this scenario since they are less restrictive than the global policy.
upvoted 1 times
...
kalbd2212
4 months, 4 weeks ago
Outcome: Both the folder-level and project-level denials will be enforced. This is because they apply to different types of traffic and don't conflict with each other. Essentially, the restrictions are combined. Key concepts: Inheritance: Policies are inherited down the hierarchy. A project inherits policies from its parent folder, and the folder inherits from the organization. Overriding: A lower-level policy can override a higher-level policy only if it is more restrictive. Constraints: Organization Policies use "constraints" to define restrictions. In your case, the constraints are likely related to VPC firewall rules.
upvoted 1 times
...
luamail78
5 months, 2 weeks ago
Selected Answer: D
https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints The org constraint is not a valid value
upvoted 1 times
nah99
4 months, 2 weeks ago
https://cloud.google.com/resource-manager/reference/rest/v1/Policy#allvalues
upvoted 1 times
...
...
oezgan
1 year ago
i asked Gemini here is the answer: In the scenario you described, the following load balancer types would be denied in a VPC defined within the project in the subfolder: external_tcp_proxy external_ssl_proxy Here's the breakdown of how Org policy constraints are enforced with inheritance: Organization Level Constraint: This denies all load balancers. Subfolder Constraint: This overrides the organization-level constraint and only denies internal_tcp_udp and internal_http_https load balancers. Project Level Constraint: This further refines the allowed types within the subfolder by denying external_tcp_proxy and external_ssl_proxy load balancers.
upvoted 1 times
...
Nachtwaker
1 year, 1 month ago
Selected Answer: D
Policies are inherited, so folder and project must be merged. Keep in mind, deny policies are always applied, and when conflicting with an allow policy the deny has higher prio and will overule the allow. So, merge all the deny policies and the result is D.
upvoted 2 times
...
mjcts
1 year, 3 months ago
Selected Answer: A
"inheritFromParent" param is by default set to "true" if not explicitly set
upvoted 4 times
...
pbrvgl
1 year, 4 months ago
My option is A. If "inheritFromParent" is not explicitly set, the default behavior in GCP is for inheritance to prevail. Based on this assumption, the project inherits from the folder and the organization above, and all constraints are merged at the project level.
upvoted 4 times
mjcts
1 year, 3 months ago
This is correct
upvoted 2 times
...
...
steveurkel
1 year, 4 months ago
Answer is C. If the policy is set to merge with the parent, the JSON output will show: "inheritFromParent": true. If the policy is set to replace the parent policy, that line is missing, which is the same as the output in the diagram. Therefore, the parent policy is replaced with the child policies and only the project-level conditions are in effect.
upvoted 1 times
...
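The inheritFromParent behavior debated in this thread can be inspected and set directly. A minimal sketch using the legacy org-policy CLI (the project ID and denied values are illustrative placeholders; verify flag names against the current gcloud reference):

```shell
# Show the effective (merged) policy the project actually enforces.
gcloud resource-manager org-policies describe \
    compute.restrictLoadBalancerCreationForTypes \
    --project=my-example-project --effective

# With listPolicy.inheritFromParent set to true, the project's denied
# values are merged with the parent's instead of replacing them.
cat > policy.yaml <<'EOF'
constraint: constraints/compute.restrictLoadBalancerCreationForTypes
listPolicy:
  inheritFromParent: true
  deniedValues:
  - EXTERNAL_TCP_PROXY
  - EXTERNAL_SSL_PROXY
EOF

gcloud resource-manager org-policies set-policy policy.yaml \
    --project=my-example-project
```

If inheritFromParent is omitted (the replace behavior several commenters describe), the describe output with --effective is the quickest way to see which denials actually apply.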
desertlotus1211
1 year, 7 months ago
The issue is we don't know the value of 'inheritFromParent'. Is it false or true? If true, then A is correct; if false, then C is correct.
upvoted 1 times
...
WheresWally
1 year, 11 months ago
The answer should be C. Link: https://cloud.google.com/resource-manager/docs/organization-policy/understanding-hierarchy Inheritance: A resource node that has an organization policy set by default supersedes any policy set by its parent nodes in the hierarchy. However, if a resource node has set inheritFromParent = true, then the effective Policy of the parent resource is inherited, merged, and reconciled to evaluate the resulting effective policy. Project 2 has an organisation policy set and there's no mention of any inheritance.
upvoted 3 times
gcpengineer
1 year, 10 months ago
why do you assume inheritance is false here?
upvoted 1 times
...
gcpengineer
1 year, 10 months ago
Deny takes precedence
upvoted 1 times
...
...
hxhwing
2 years, 3 months ago
Selected Answer: C
The project is not inheriting the parent policy but customizes its own
upvoted 4 times
...
madhu81321
2 years, 4 months ago
Selected Answer: D
There are restrictions at folder level too.
upvoted 2 times
...
TheBuckler
2 years, 6 months ago
NVM - the answer actually is A. The Org has its own restrictions too!
upvoted 3 times
Table2022
2 years, 5 months ago
Agreed with A, good one!
upvoted 2 times
...
...
TheBuckler
2 years, 6 months ago
The answer is D. We also need to consider the Load Balancer types that are restricted at the Folder level as well as the Project level.
upvoted 2 times
...
[Removed]
2 years, 7 months ago
Selected Answer: A
It's A.
upvoted 2 times
...

Question 168

Exam Professional Cloud Security Engineer topic 1 question 168 discussion

Question #: 168
Topic #: 1

Your security team wants to implement a defense-in-depth approach to protect sensitive data stored in a Cloud Storage bucket. Your team has the following requirements:
✑ The Cloud Storage bucket in Project A can only be readable from Project B.
✑ The Cloud Storage bucket in Project A cannot be accessed from outside the network.
✑ Data in the Cloud Storage bucket cannot be copied to an external Cloud Storage bucket.
What should the security team do?

  • A. Enable domain restricted sharing in an organization policy, and enable uniform bucket-level access on the Cloud Storage bucket.
  • B. Enable VPC Service Controls, create a perimeter around Projects A and B, and include the Cloud Storage API in the Service Perimeter configuration.
  • C. Enable Private Access in both Project A and B's networks with strict firewall rules that allow communication between the networks.
  • D. Enable VPC Peering between Project A and B's networks with strict firewall rules that allow communication between the networks.
Suggested Answer: B 🗳️
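A rough sketch of what option B would look like with gcloud (all names, project numbers, and the policy ID are placeholders; an access policy must already exist for the organization):

```shell
# Find the access policy that belongs to the organization.
gcloud access-context-manager policies list \
    --organization=ORGANIZATION_ID

# Create a perimeter around Projects A and B and restrict the
# Cloud Storage API, so the bucket cannot be read from outside the
# perimeter and data cannot be copied to an external bucket.
gcloud access-context-manager perimeters create storage_perimeter \
    --title="storage_perimeter" \
    --resources=projects/PROJECT_A_NUMBER,projects/PROJECT_B_NUMBER \
    --restricted-services=storage.googleapis.com \
    --policy=POLICY_ID
```

Note that perimeters reference project numbers, not project IDs, which is an easy mistake to make when trying this out.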

Comments

Baburao
Highly Voted 1 year, 7 months ago
Should be B. VPC Peering is between organizations, not between projects in an organization; that is Shared VPC. In this case, both projects are in the same organization, so having VPC Service Controls around both projects with the necessary rules should be fine.
upvoted 7 times
GHOST1985
1 year, 6 months ago
Answer is B, but you can have VPC peering between two projects in the same organization; nothing prevents that. If you have only two projects that need to communicate, VPC peering is better than a Shared VPC ;)
upvoted 2 times
...
...
anshad666
Most Recent 7 months, 3 weeks ago
Selected Answer: B
A classic example of VPC Service Control perimeter
upvoted 4 times
...
TonytheTiger
1 year, 4 months ago
B: https://cloud.google.com/vpc-service-controls/docs/overview
upvoted 3 times
...
AzureDP900
1 year, 5 months ago
B is right
upvoted 2 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: B
B. Enable VPC Service Controls, create a perimeter around Projects A and B, and include the Cloud Storage API in the Service Perimeter configuration.
upvoted 4 times
...
tangac
1 year, 7 months ago
Selected Answer: B
https://www.examtopics.com/discussions/google/view/33958-exam-professional-cloud-security-engineer-topic-1-question/
upvoted 4 times
...

Question 169

Exam Professional Cloud Security Engineer topic 1 question 169 discussion

Question #: 169
Topic #: 1

You need to create a VPC that enables your security team to control network resources such as firewall rules. How should you configure the network to allow for separation of duties for network resources?

  • A. Set up multiple VPC networks, and set up multi-NIC virtual appliances to connect the networks.
  • B. Set up VPC Network Peering, and allow developers to peer their network with a Shared VPC.
  • C. Set up a VPC in a project. Assign the Compute Network Admin role to the security team, and assign the Compute Admin role to the developers.
  • D. Set up a Shared VPC where the security team manages the firewall rules, and share the network with developers via service projects.
Suggested Answer: D 🗳️
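A minimal sketch of the Shared VPC split of duties in option D (project names and group addresses are illustrative):

```shell
# Host project owned by the security team; service project used by developers.
gcloud compute shared-vpc enable SECURITY_HOST_PROJECT

gcloud compute shared-vpc associated-projects add DEV_SERVICE_PROJECT \
    --host-project=SECURITY_HOST_PROJECT

# Security team manages firewall rules and policies in the host project.
gcloud projects add-iam-policy-binding SECURITY_HOST_PROJECT \
    --member="group:security-team@example.com" \
    --role="roles/compute.securityAdmin"

# Developers may consume the shared network but cannot modify it.
gcloud projects add-iam-policy-binding SECURITY_HOST_PROJECT \
    --member="group:developers@example.com" \
    --role="roles/compute.networkUser"
```

In practice roles/compute.networkUser is often granted per subnet rather than project-wide, which tightens the separation further.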

Comments

AzureDP900
Highly Voted 1 year, 11 months ago
D. Set up a Shared VPC where the security team manages the firewall rules, and share the network with developers via service projects.
upvoted 6 times
...
Bettoxicity
Most Recent 6 months, 1 week ago
Selected Answer: D
D. Shared VPC: This feature allows centralizing network management within a host project (managed by the security team). Service projects (managed by developers) can then be linked to the Shared VPC, inheriting the network configuration and firewall rules.
upvoted 1 times
...
AwesomeGCP
2 years ago
Selected Answer: D
D. Set up a Shared VPC where the security team manages the firewall rules, and share the network with developers via service projects.
upvoted 4 times
...
zellck
2 years ago
Selected Answer: D
D is the answer.
upvoted 3 times
...
jitu028
2 years ago
Answer is D
upvoted 2 times
...

Question 170

Exam Professional Cloud Security Engineer topic 1 question 170 discussion

Question #: 170
Topic #: 1

You are onboarding new users into Cloud Identity and discover that some users have created consumer user accounts using the corporate domain name. How should you manage these consumer user accounts with Cloud Identity?

  • A. Use Google Cloud Directory Sync to convert the unmanaged user accounts.
  • B. Create a new managed user account for each consumer user account.
  • C. Use the transfer tool for unmanaged user accounts.
  • D. Configure single sign-on using a customer's third-party provider.
Suggested Answer: C 🗳️

Comments

zellck
Highly Voted 1 year, 6 months ago
Selected Answer: C
C is the answer. https://support.google.com/a/answer/6178640?hl=en The transfer tool enables you to see what unmanaged users exist, and then invite those unmanaged users to the domain.
upvoted 5 times
AzureDP900
1 year, 5 months ago
C is right
upvoted 2 times
...
...
GHOST1985
Highly Voted 1 year, 6 months ago
Selected Answer: C
https://cloud.google.com/architecture/identity/migrating-consumer-accounts#finding_unmanaged_user_accounts
upvoted 5 times
...
Andrei_Z
Most Recent 7 months, 1 week ago
Selected Answer: A
Option A, using Google Cloud Directory Sync (GCDS), is the more appropriate choice if you want to convert the @gmail.com accounts to use your corporate domain. GCDS allows you to synchronize user accounts and make changes like updating the email address domain to match your company's domain. This would effectively convert the accounts to use the corporate domain for their email addresses.
upvoted 1 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: C
C. Use the transfer tool for unmanaged user accounts.
upvoted 3 times
...
Random_Mane
1 year, 7 months ago
Selected Answer: C
C. https://cloud.google.com/architecture/identity/migrating-consumer-accounts "In addition to showing you all unmanaged accounts, the transfer tool for unmanaged users lets you initiate an account transfer by sending an account transfer request. Initially, an account is listed as Not yet invited, indicating that no transfer request has been sent."
upvoted 3 times
...
Baburao
1 year, 7 months ago
C seems to be the correct option in this situation. https://support.google.com/cloudidentity/answer/7062710?hl=en
upvoted 2 times
...

Question 171

Exam Professional Cloud Security Engineer topic 1 question 171 discussion

Question #: 171
Topic #: 1

You have created an OS image that is hardened per your organization's security standards and is being stored in a project managed by the security team. As a
Google Cloud administrator, you need to make sure all VMs in your Google Cloud organization can only use that specific OS image while minimizing operational overhead. What should you do? (Choose two.)

  • A. Grant users the compute.imageUser role in their own projects.
  • B. Grant users the compute.imageUser role in the OS image project.
  • C. Store the image in every project that is spun up in your organization.
  • D. Set up an image access organization policy constraint, and list the security team managed project in the project's allow list.
  • E. Remove VM instance creation permission from users of the projects, and only allow you and your team to create VM instances.
Suggested Answer: BD 🗳️
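The B+D combination sketched as commands (organization ID, image project name, and group are placeholders; the constraint is documented as compute.trustedImageProjects):

```shell
# D: only the security team's project may serve boot images.
cat > trusted-images.yaml <<'EOF'
constraint: constraints/compute.trustedImageProjects
listPolicy:
  allowedValues:
  - projects/security-images-project
EOF

gcloud resource-manager org-policies set-policy trusted-images.yaml \
    --organization=ORGANIZATION_ID

# B: let developers consume the hardened image from that project.
gcloud projects add-iam-policy-binding security-images-project \
    --member="group:developers@example.com" \
    --role="roles/compute.imageUser"
```

Together these ensure every new VM boots only from the hardened image, while users keep creating VMs in their own projects with no extra operational overhead.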

Comments

zellck
Highly Voted 2 years ago
Selected Answer: BD
BD is the answer. https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints - constraints/compute.trustedImageProjects This list constraint defines the set of projects that can be used for image storage and disk instantiation for Compute Engine. If this constraint is active, only images from trusted projects will be allowed as the source for boot disks for new instances.
upvoted 10 times
AzureDP900
1 year, 11 months ago
Thank you for sharing link, BD correct
upvoted 1 times
...
...
Bettoxicity
Most Recent 6 months, 1 week ago
Selected Answer: BD
BD. To ensure all VMs in your organization use the specific hardened OS image while minimizing operational overhead, you should choose two options that achieve: 1. Centralized Image Management: The image should be stored in a single, secure location. 2. Restricted Image Access: VMs across the organization should only be able to access this specific image.
upvoted 2 times
...
Xoxoo
1 year ago
Selected Answer: BD
To make sure all VMs in your Google Cloud organization can only use that specific OS image while minimizing operational overhead, you can take the following steps: 1) Grant users the compute.imageUser role in the OS image project . This allows users to use the OS image in their projects without granting them additional permissions . 2) Set up an image access organization policy constraint, and list the security team managed project in the project’s allow list . This ensures that only authorized users can access the OS image . Therefore, options B and D are the correct answers.
upvoted 2 times
...
cyberpunk21
1 year, 1 month ago
Selected Answer: BD
BD are correct
upvoted 2 times
...
Littleivy
1 year, 11 months ago
Selected Answer: BD
Need to grant permission of project owned the image
upvoted 2 times
...
rrvv
1 year, 11 months ago
Answer should be B and D; review the example listed here to grant the IAM policy to a service account: https://cloud.google.com/deployment-manager/docs/configuration/using-images-from-other-projects-for-vm-instances#granting_access_to_images
upvoted 2 times
Littleivy
1 year, 11 months ago
Need to grant permission of project owned the image
upvoted 1 times
...
...
AwesomeGCP
2 years ago
Selected Answer: BD
B. Grant users the compute.imageUser role in the OS image project. D. Set up an image access organization policy constraint, and list the security team managed project in the project's allow list.
upvoted 3 times
...
GHOST1985
2 years ago
Selected Answer: AD
The compute.imageUser role is permission to list and read images without having other permissions on the image. Granting this role at the project level gives users the ability to list all images in the project and create resources, such as instances and persistent disks, based on images in the project. https://cloud.google.com/compute/docs/access/iam#compute.imageUser
upvoted 3 times
GHOST1985
2 years ago
Sorry Answer BD
upvoted 2 times
...
...
Baburao
2 years, 1 month ago
I think it should be BD instead of AD. Users should have access to the project where the secured image is stored which is "Security Team's project". Users will obviously need permission to create VM in their own project but to use image from another project, they need "imageUser" permission on that project.
upvoted 3 times
...

Question 172

Exam Professional Cloud Security Engineer topic 1 question 172 discussion

Question #: 172
Topic #: 1

You're developing the incident response plan for your company. You need to define the access strategy that your DevOps team will use when reviewing and investigating a deployment issue in your Google Cloud environment. There are two main requirements:
✑ Least-privilege access must be enforced at all times.
✑ The DevOps team must be able to access the required resources only during the deployment issue.
How should you grant access while following Google-recommended best practices?

  • A. Assign the Project Viewer Identity and Access Management (IAM) role to the DevOps team.
  • B. Create a custom IAM role with limited list/view permissions, and assign it to the DevOps team.
  • C. Create a service account, and grant it the Project Owner IAM role. Give the Service Account User Role on this service account to the DevOps team.
  • D. Create a service account, and grant it limited list/view permissions. Give the Service Account User Role on this service account to the DevOps team.
Suggested Answer: B 🗳️
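Much of the B-vs-D debate below is about how to make a limited role temporary. One way to get both least privilege and just-in-time access is a custom role combined with an IAM condition; a sketch (project ID, permission list, group, and timestamp are all illustrative):

```shell
# Custom role with only the list/view permissions the investigation needs.
gcloud iam roles create incidentViewer --project=PROJECT_ID \
    --title="Incident viewer" \
    --permissions="compute.instances.list,logging.logEntries.list"

# Time-bound grant via an IAM condition: access expires automatically
# when the deployment-issue window closes.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="group:devops@example.com" \
    --role="projects/PROJECT_ID/roles/incidentViewer" \
    --condition='expression=request.time < timestamp("2024-06-01T00:00:00Z"),title=incident-1234'
```

The condition expression addresses the "always on" objection to option B without resorting to shared service account credentials.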

Comments

Baburao
Highly Voted 2 years, 7 months ago
I think the answer should D. Option B gives them "Always On" permissions but the question asks for "Just in time" permissions. So, this is possible only with a Service Account. Once the incident response team resolves the issue, the service account key can be disabled.
upvoted 17 times
pfilourenco
1 year, 8 months ago
You can create "Just in time" permissions with IAM conditions.
upvoted 7 times
...
...
Mauratay
Most Recent 1 month, 1 week ago
Selected Answer: B
It follows best practices and has traceability.
upvoted 1 times
...
KLei
3 months, 2 weeks ago
Selected Answer: D
Granting an IAM role directly to DevOps team members is wrong: it does not fulfill the least-privilege principle. A service account with "limited list/view permissions" for DevOps team members is correct: it follows the least-privilege principle and offers more flexibility.
upvoted 2 times
...
Pime13
4 months ago
Selected Answer: B
I vote B. Options A and C grant broader permissions than necessary, which does not align with the least-privilege principle. Option D involves using a service account, which is not the best practice for granting temporary access to human users. By creating a custom IAM role, you ensure that the DevOps team has the precise permissions needed for their tasks, and you can easily adjust or revoke these permissions as necessary.
upvoted 2 times
...
BPzen
4 months, 1 week ago
Selected Answer: D
Why Option D is Best: Least-Privilege Access: Permissions are limited to only what is necessary for the investigation by tailoring the service account’s IAM role. Controlled Access: By managing the service account or its impersonation permissions, you can ensure the DevOps team can access the resources only during deployment issues.
upvoted 1 times
...
Mr_MIXER007
7 months, 1 week ago
Selected Answer: D
D. Create a service account, and grant it limited list/view permissions. Give the Service Account User Role on this service account to the DevOps team. This option allows you to create a service account with limited access rights (list/view), and the DevOps team will be able to use this service account only when needed. This is consistent with the principle of least privilege and incident-only access.
upvoted 1 times
...
jujanoso
9 months ago
Selected Answer: D
D. This approach allows the creation of a service account with specific limited permissions necessary for investigating deployment issues. The DevOps team can then be granted the Service Account User role on this service account. This setup ensures that the DevOps team can use the service account with appropriate permissions only when needed, fulfilling both requirements of least-privilege access and temporary access
upvoted 1 times
...
shanwford
11 months, 3 weeks ago
Selected Answer: D
It's (D) according to https://cloud.google.com/iam/docs/best-practices-service-accounts: "Some applications only require access to certain resources at specific times or under specific circumstances... In such scenarios, using a single service account and granting it access to all resources goes against the principle of least privilege"
upvoted 2 times
...
Bettoxicity
1 year ago
Selected Answer: D
D. -Least Privilege: By creating a service account with restricted permissions (limited list/view access to specific resources), you adhere to the principle of least privilege. The DevOps team can only access the information needed for investigation without broader project-level control. -Temporary Access: Service accounts are not tied to individual users. Once the investigation is complete, you can simply revoke access to the service account from the DevOps team, effectively removing their access to the resources. This ensures temporary access for the specific incident.
upvoted 1 times
...
glb2
1 year ago
Selected Answer: B
Answer is B, it sets least-privilege access.
upvoted 2 times
...
dija123
1 year, 1 month ago
Selected Answer: D
Any DevOps engineer knows very well that it is D.
upvoted 1 times
...
Nachtwaker
1 year, 1 month ago
Selected Answer: B
B or D; I prefer B because of traceability: impersonating a service account is harder to audit than using a personal account.
upvoted 3 times
...
dija123
1 year, 1 month ago
Selected Answer: D
I go with D. While B seems to allow defining specific permissions, it adds complexity to the access control strategy and might still grant more access than necessary.
upvoted 1 times
...
JoaquinJimenezGarcia
1 year, 4 months ago
Selected Answer: B
B follows the google best practices
upvoted 3 times
...
rglearn
1 year, 6 months ago
Selected Answer: B
Answer should be B
upvoted 2 times
...
desertlotus1211
1 year, 7 months ago
The real answer should be a 'break-glass' tool.
upvoted 2 times
...

Question 173

Exam Professional Cloud Security Engineer topic 1 question 173 discussion

Question #: 173
Topic #: 1

You are working with a client who plans to migrate their data to Google Cloud. You are responsible for recommending an encryption service to manage their encrypted keys. You have the following requirements:
✑ The master key must be rotated at least once every 45 days.
✑ The solution that stores the master key must be FIPS 140-2 Level 3 validated.
✑ The master key must be stored in multiple regions within the US for redundancy.
Which solution meets these requirements?

  • A. Customer-managed encryption keys with Cloud Key Management Service
  • B. Customer-managed encryption keys with Cloud HSM
  • C. Customer-supplied encryption keys
  • D. Google-managed encryption keys
Suggested Answer: B 🗳️
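Option B maps directly onto a couple of gcloud kms commands; a sketch (keyring and key names plus the rotation timestamp are illustrative):

```shell
# "us" is a multi-region location, satisfying the US redundancy requirement.
gcloud kms keyrings create siem-keyring --location=us

# HSM protection level gives FIPS 140-2 Level 3 validated key storage;
# the rotation period enforces rotation at least every 45 days.
gcloud kms keys create master-key \
    --location=us \
    --keyring=siem-keyring \
    --purpose=encryption \
    --protection-level=hsm \
    --rotation-period=45d \
    --next-rotation-time=2025-01-01T00:00:00Z
```

The same key-creation front end is used for software-protected KMS keys; only the protection level differs, which is why A and B look so similar at first glance.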

Comments

shetniel
Highly Voted 1 year ago
Selected Answer: B
The only 2 options that satisfy FIPS 140-2 Level 3 requirement are Cloud HSM or Cloud EKM. https://cloud.google.com/kms/docs/key-management-service#choose
upvoted 10 times
...
AwesomeGCP
Highly Voted 1 year, 6 months ago
Selected Answer: B
B. Customer-managed encryption keys with Cloud HSM
upvoted 7 times
...
KLei
Most Recent 3 months, 2 weeks ago
Selected Answer: B
Cloud HSM helps you enforce regulatory compliance for your workloads in Google Cloud. With Cloud HSM, you can generate encryption keys and perform cryptographic operations in FIPS 140-2 Level 3 validated HSMs.
upvoted 1 times
...
Xoxoo
6 months, 3 weeks ago
Selected Answer: B
To meet the given requirements, you should recommend using Customer-managed encryption keys with Cloud HSM. This solution allows you to manage your own encryption keys while leveraging the Google Cloud Hardware Security Module (HSM) service, which is FIPS 140-2 Level 3 validated. With Cloud HSM, you can rotate the master key at least once every 45 days and store it in multiple regions within the US for redundancy. While Customer-managed encryption keys with Cloud Key Management Service (KMS) (option A) is a valid choice for managing encryption keys, it does not provide the FIPS 140-2 Level 3 validation required by the given requirements. Customer-supplied encryption keys (option C) are not suitable for this scenario as they do not offer the same level of control and security as customer-managed keys. Google-managed encryption keys (option D) would not meet the requirement of having a solution that stores the master key validated at FIPS 140-2 Level 3.
upvoted 6 times
...
cyberpunk21
7 months, 2 weeks ago
Selected Answer: B
Of all the options, only HSM has Level 3 validation
upvoted 1 times
...
TonytheTiger
1 year, 4 months ago
Answer: B https://cloud.google.com/docs/security/cloud-hsm-architecture
upvoted 2 times
...
Littleivy
1 year, 5 months ago
Selected Answer: B
Cloud HSM is right answer
upvoted 4 times
...
AzureDP900
1 year, 5 months ago
Cloud HSM is right answer is B
upvoted 2 times
...
soltium
1 year, 6 months ago
B. Cloud HSM keys can be rotated automatically (same front end as KMS), are FIPS 140-2 Level 3 validated, and support multi-region.
upvoted 4 times
...
zellck
1 year, 6 months ago
Selected Answer: B
B is the answer.
upvoted 4 times
...
Sav94
1 year, 7 months ago
Both A and B work, but the question asks for redundancy, so I think it's A.
upvoted 1 times
...
Random_Mane
1 year, 7 months ago
Selected Answer: B
B. https://cloud.google.com/docs/security/key-management-deep-dive https://cloud.google.com/kms/docs/faq "Keys generated with protection level HSM, and the cryptographic operations performed with them, comply with FIPS 140-2 Level 3."
upvoted 3 times
...
Baburao
1 year, 7 months ago
This should definitely be A. Only Cloud KMS supports FIPS 140-2 levels 1, 2 and 3. https://cloud.google.com/kms/docs/faq#standards
upvoted 1 times
Arturo_Cloud
1 year, 7 months ago
I disagree with you; you are asked only for FIPS 140-2 Level 3 and multi-region availability, so B is the best answer. Here is much more detailed evidence: https://cloud.google.com/docs/security/cloud-hsm-architecture
upvoted 4 times
...
...

Question 174

Exam Professional Cloud Security Engineer topic 1 question 174 discussion

Question #: 174
Topic #: 1

You manage your organization's Security Operations Center (SOC). You currently monitor and detect network traffic anomalies in your VPCs based on network logs. However, you want to explore your environment using network payloads and headers. Which Google Cloud product should you use?

  • A. Cloud IDS
  • B. VPC Service Controls logs
  • C. VPC Flow Logs
  • D. Google Cloud Armor
  • E. Packet Mirroring
Suggested Answer: E 🗳️
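For reference, a Packet Mirroring policy is a single command once a collector internal load balancer exists; a sketch (region, network, subnet, and forwarding-rule names are placeholders):

```shell
# Mirror all traffic (payloads and headers) from a subnet to a collector
# internal load balancer fronting the analysis tooling.
gcloud compute packet-mirrorings create soc-mirror \
    --region=us-central1 \
    --network=prod-vpc \
    --mirrored-subnets=prod-subnet \
    --collector-ilb=soc-collector-forwarding-rule
```

This is also relevant to the A-vs-E debate below: Cloud IDS sets up essentially this mirroring under the hood and forwards the cloned traffic to its own inspection endpoint.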

Comments

zellck
Highly Voted 2 years, 6 months ago
Selected Answer: E
E is the answer. https://cloud.google.com/vpc/docs/packet-mirroring Packet Mirroring clones the traffic of specified instances in your Virtual Private Cloud (VPC) network and forwards it for examination. Packet Mirroring captures all traffic and packet data, including payloads and headers.
upvoted 10 times
...
kalyan_krishna742020
Highly Voted 2 years, 4 months ago
It should be A. Cloud IDS inspects not only the IP header of the packet, but also the payload. https://cloud.google.com/blog/products/identity-security/how-google-cloud-ids-helps-detect-advanced-network-threats
upvoted 8 times
...
JohnDohertyDoe
Most Recent 3 months, 2 weeks ago
Selected Answer: A
Both A and E would work, but in this case I believe Cloud IDS is a better fit, as it monitors and detects network anomalies.
upvoted 1 times
...
Pime13
4 months ago
Selected Answer: E
https://cloud.google.com/vpc/docs/packet-mirroring Packet Mirroring clones the traffic of specified instances in your Virtual Private Cloud (VPC) network and forwards it for examination. Packet Mirroring captures all traffic and packet data, including payloads and headers. The capture can be configured for both egress and ingress traffic, only ingress traffic, or only egress traffic. The mirroring happens on the virtual machine (VM) instances, not on the network. Consequently, Packet Mirroring consumes additional bandwidth on the VMs. Packet Mirroring is useful when you need to monitor and analyze your security status. It exports all traffic, not only the traffic between sampling periods. For example, you can use security software that analyzes mirrored traffic to detect all threats or anomalies. Additionally, you can inspect the full traffic flow to detect application performance issues.
upvoted 1 times
...
MoAk
4 months, 3 weeks ago
The answer previously would have been E; however, I believe this should now be A, Cloud IDS.
upvoted 2 times
...
Bettoxicity
1 year ago
Selected Answer: E
E. Packet Mirroring allows you to replicate network traffic flowing through your VPCs to a designated destination. This destination can be a dedicated instance or a network analysis tool. With full packet capture, you can inspect the contents of network payloads and headers, providing a deeper level of network traffic analysis compared to just flow logs.
upvoted 1 times
...
desertlotus1211
1 year, 7 months ago
Answer is A: it asks for a 'Google Cloud product'. Cloud IDS includes packet mirroring and is built with Palo Alto threat detection. https://www.happtiq.com/cloud-ids/ After an endpoint has been specified, traffic from specific instances is cloned by setting up a packet mirroring policy. All the data from the traffic, along with packet data, payloads, and headers, is forwarded to Cloud IDS for examination.
upvoted 2 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: E
E is the answer
upvoted 1 times
...
gcpengineer
1 year, 10 months ago
Selected Answer: A
Cloud IDS is based on packet mirroring, and the question asks for a product to analyze traffic, so A is the answer.
upvoted 3 times
...
AzureDP900
2 years, 5 months ago
E Packet Mirroring captures all traffic and packet data, including payloads and headers. The capture can be configured for both egress and ingress traffic, only ingress traffic, or only egress traffic.
upvoted 3 times
...
hello_gcp_devops
2 years, 5 months ago
Packet Mirroring clones the traffic of specified instances in your Virtual Private Cloud (VPC) network and forwards it for examination. Packet Mirroring captures all traffic and packet data, including payloads and headers. The capture can be configured for both egress and ingress traffic, only ingress traffic, or only egress traffic.
upvoted 1 times
hello_gcp_devops
2 years, 5 months ago
E is the answer
upvoted 2 times
...
...
Random_Mane
2 years, 7 months ago
Selected Answer: E
https://cloud.google.com/vpc/docs/packet-mirroring
upvoted 3 times
...

Question 175

Question #: 175
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are consulting with a client that requires end-to-end encryption of application data (including data in transit, data in use, and data at rest) within Google Cloud.
Which options should you utilize to accomplish this? (Choose two.)

  • A. External Key Manager
  • B. Customer-supplied encryption keys
  • C. Hardware Security Module
  • D. Confidential Computing and Istio
  • E. Client-side encryption
Suggested Answer: DE 🗳️

Comments

GHOST1985
Highly Voted 2 years, 6 months ago
Selected Answer: DE
Confidential Computing enables encryption for "data in use". Client-side encryption provides security for "data in transit" from the customer site to GCP. Once data is at rest, Google's default encryption covers "data at rest".
upvoted 12 times
...
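As a quick illustration of the Confidential Computing half of answer D, a hedged gcloud sketch; the instance name, zone, and image are placeholders, and Confidential VMs require an AMD-based machine type such as N2D:

```shell
# Create a Confidential VM so data stays encrypted while in use.
gcloud compute instances create my-confidential-vm \
    --zone=us-central1-a \
    --machine-type=n2d-standard-2 \
    --confidential-compute \
    --maintenance-policy=TERMINATE \
    --image-family=ubuntu-2204-lts \
    --image-project=ubuntu-os-cloud
```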
Baburao
Highly Voted 2 years, 7 months ago
I feel this should be DE. Confidential Computing enables encryption for "data in use". Client-side encryption provides security for "data in transit" from the customer site to GCP. Once data is at rest, Google's default encryption covers "data at rest".
upvoted 8 times
...
Pime13
Most Recent 4 months ago
Selected Answer: DE
Confidential Computing and Istio (Option D): Confidential Computing protects data in use by running workloads in secure enclaves, ensuring that data remains encrypted even during processing. Istio can help secure data in transit by providing mutual TLS (mTLS) for service-to-service communication within your Kubernetes clusters. Client-side encryption (Option E): Client-side encryption ensures that data is encrypted before it is sent to Google Cloud, protecting data in transit and at rest. This approach allows you to maintain control over the encryption keys and ensures that data is encrypted throughout its lifecycle.
upvoted 1 times
...
DattaHinge
6 months, 2 weeks ago
Selected Answer: BC
B. Customer-supplied encryption keys: This is crucial for achieving true end-to-end encryption. By providing your own encryption keys, you maintain complete control over the data; even Google Cloud cannot decrypt it without your keys. C. Hardware Security Module (HSM): HSMs provide a secure environment for storing and managing your encryption keys. This adds an extra layer of security, ensuring that your keys are protected from unauthorized access.
upvoted 2 times
...
MFay
11 months, 2 weeks ago
Answer BD. To accomplish end-to-end encryption of application data within Google Cloud, including data in transit, data in use, and data at rest, you should utilize the following options: B. Customer-supplied encryption keys - Customer-supplied encryption keys (CSEK) allow you to use your own encryption keys to protect your data at rest in Google Cloud, ensuring that your data is encrypted with keys that you control. D. Confidential Computing and Istio - Confidential Computing provides a hardware-based trusted execution environment (TEE) to protect data in use, ensuring that sensitive workloads and data remain encrypted while being processed. Istio can be used for securing data in transit within Google Cloud. Therefore, the correct answers are B and D.
upvoted 2 times
...
desertlotus1211
1 year, 7 months ago
I'll go with answer CD: https://cloud.google.com/kubernetes-engine/docs/how-to/encrypting-secrets#creating-key
upvoted 2 times
...
Andrei_Z
1 year, 7 months ago
Selected Answer: BD
Option E (Client-side encryption) typically refers to encrypting data on the client side before sending it to the cloud, and it can complement the other options but is not one of the primary mechanisms for achieving end-to-end encryption within Google Cloud itself.
upvoted 3 times
desertlotus1211
1 year, 7 months ago
the key in the question is 'within GCP'... So E cannot be correct
upvoted 2 times
...
...
cyberpunk21
1 year, 7 months ago
Selected Answer: DE
D ensures encryption for data in use and in transit; E ensures encryption at rest.
upvoted 2 times
...
TNT87
2 years ago
Selected Answer: BE
Why not B, E?
upvoted 1 times
gcpengineer
1 year, 10 months ago
How will you ensure data is encrypted in transit?
upvoted 1 times
...
...
pmriffo
2 years, 3 months ago
https://cloud.google.com/compute/confidential-vm/docs/about-cvm#end-to-end_encryption
upvoted 1 times
...
Littleivy
2 years, 5 months ago
Selected Answer: DE
Google Cloud customers with additional requirements for encryption of data over WAN can choose to implement further protections for data as it moves from a user to an application, or virtual machine to virtual machine. These protections include IPSec tunnels, Gmail S/MIME, managed SSL certificates, and Istio. https://cloud.google.com/docs/security/encryption-in-transit
upvoted 4 times
...
AwesomeGCP
2 years, 6 months ago
Selected Answer: DE
D. Confidential Computing and Istio E. Client-side encryption
upvoted 3 times
...
zellck
2 years, 6 months ago
Selected Answer: AE
AE is my answer.
upvoted 1 times
...

Question 176

Question #: 176
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You need to enforce a security policy in your Google Cloud organization that prevents users from exposing objects in their buckets externally. There are currently no buckets in your organization. Which solution should you implement proactively to achieve this goal with the least operational overhead?

  • A. Create an hourly cron job to run a Cloud Function that finds public buckets and makes them private.
  • B. Enable the constraints/storage.publicAccessPrevention constraint at the organization level.
  • C. Enable the constraints/storage.uniformBucketLevelAccess constraint at the organization level.
  • D. Create a VPC Service Controls perimeter that protects the storage.googleapis.com service in your projects that contains buckets. Add any new project that contains a bucket to the perimeter.
Suggested Answer: B 🗳️

Comments

cyberpunk21
7 months, 2 weeks ago
Selected Answer: B
B is correct; C is about uniform bucket-level access, which is not what we need here.
upvoted 2 times
...
pedrojorge
1 year, 2 months ago
Selected Answer: B
B, "When you apply the publicAccessPrevention constraint on a resource, public access is restricted for all buckets and objects, both new and existing, under that resource."
upvoted 4 times
...
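The constraint the comments above quote can be sketched as a single command; the organization ID is a placeholder, and this assumes the legacy `gcloud resource-manager org-policies` surface:

```shell
# Enforce public access prevention for all current and future buckets
# under the organization.
gcloud resource-manager org-policies enable-enforce \
    storage.publicAccessPrevention \
    --organization=123456789012
```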
TonytheTiger
1 year, 4 months ago
Exam Question Dec 2022
upvoted 3 times
...
AzureDP900
1 year, 5 months ago
B is right
upvoted 2 times
AzureDP900
1 year, 5 months ago
Public access prevention protects Cloud Storage buckets and objects from being accidentally exposed to the public. When you enforce public access prevention, no one can make data in applicable buckets public through IAM policies or ACLs. There are two ways to enforce public access prevention: You can enforce public access prevention on individual buckets. If your bucket is contained within an organization, you can enforce public access prevention by using the organization policy constraint storage.publicAccessPrevention at the project, folder, or organization level.
upvoted 2 times
...
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: B
B. Enable the constraints/storage.publicAccessPrevention constraint at the organization level.
upvoted 2 times
...
zellck
1 year, 6 months ago
Selected Answer: B
B is the answer. https://cloud.google.com/storage/docs/public-access-prevention Public access prevention protects Cloud Storage buckets and objects from being accidentally exposed to the public. If your bucket is contained within an organization, you can enforce public access prevention by using the organization policy constraint storage.publicAccessPrevention at the project, folder, or organization level.
upvoted 4 times
...
Random_Mane
1 year, 7 months ago
Selected Answer: B
B. https://cloud.google.com/storage/docs/org-policy-constraints "When you apply the publicAccessPrevention constraint on a resource, public access is restricted for all buckets and objects, both new and existing, under that resource."
upvoted 2 times
...

Question 177

Question #: 177
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your company requires the security and network engineering teams to identify all network anomalies and be able to capture payloads within VPCs. Which method should you use?

  • A. Define an organization policy constraint.
  • B. Configure packet mirroring policies.
  • C. Enable VPC Flow Logs on the subnet.
  • D. Monitor and analyze Cloud Audit Logs.
Suggested Answer: B 🗳️

Comments

zellck
Highly Voted 1 year, 6 months ago
Selected Answer: B
B is the answer. https://cloud.google.com/vpc/docs/packet-mirroring Packet Mirroring clones the traffic of specified instances in your Virtual Private Cloud (VPC) network and forwards it for examination. Packet Mirroring captures all traffic and packet data, including payloads and headers.
upvoted 7 times
AzureDP900
1 year, 5 months ago
B is right .
upvoted 2 times
AzureDP900
1 year, 5 months ago
Packet Mirroring is useful when you need to monitor and analyze your security status. It exports all traffic, not only the traffic between sampling periods. For example, you can use security software that analyzes mirrored traffic to detect all threats or anomalies. Additionally, you can inspect the full traffic flow to detect application performance issues.
upvoted 3 times
...
...
...
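A hedged sketch of the packet mirroring policy answer B describes; the network, subnet, and collector forwarding-rule names are hypothetical, the collector must be an internal load balancer configured for mirroring, and by default the policy mirrors both ingress and egress:

```shell
# Mirror all traffic from a subnet to a collector internal load
# balancer for payload-level analysis.
gcloud compute packet-mirrorings create my-mirror-policy \
    --region=us-central1 \
    --network=my-vpc \
    --mirrored-subnets=my-subnet \
    --collector-ilb=my-collector-forwarding-rule
```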
desertlotus1211
Most Recent 7 months ago
Should be Cloud IDS ;)
upvoted 1 times
...
cyberpunk21
7 months, 2 weeks ago
Selected Answer: B
B is correct
upvoted 1 times
...
AwesomeGCP
1 year, 6 months ago
Selected Answer: B
B. Configure packet mirroring policies.
upvoted 3 times
...
Random_Mane
1 year, 7 months ago
Selected Answer: B
https://cloud.google.com/vpc/docs/packet-mirroring
upvoted 2 times
...

Question 178

Question #: 178
Topic #: 1
[All Professional Cloud Security Engineer Questions]

An organization wants to track how bonus compensations have changed over time to identify employee outliers and correct earning disparities. This task must be performed without exposing the sensitive compensation data for any individual and must be reversible to identify the outlier.

Which Cloud Data Loss Prevention API technique should you use?

  • A. Cryptographic hashing
  • B. Redaction
  • C. Format-preserving encryption
  • D. Generalization
Suggested Answer: C 🗳️

Comments

mjcts
9 months, 1 week ago
Selected Answer: C
C - it's reversible
upvoted 1 times
...
i_am_robot
9 months, 4 weeks ago
Selected Answer: C
The best option would be C. Format-preserving encryption. Format-preserving encryption (FPE) allows you to encrypt sensitive data in a way that maintains the format of the input data. This is particularly useful when you need to use encrypted data in systems that require data in a specific format. Importantly, FPE is reversible, meaning you can decrypt the data back to its original form when necessary. This would allow the organization to track changes over time and identify outliers, without exposing sensitive compensation data.
upvoted 3 times
...
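To see why FPE fits the requirement, here is a toy Feistel-style construction — not Cloud DLP's actual FPE-FFX implementation and not cryptographically vetted — showing the two properties the comments rely on: the output keeps the numeric format, and the mapping is reversible with the key:

```python
import hashlib
import hmac

def _round_value(key: bytes, half: str, round_no: int, width: int) -> int:
    # Pseudo-random round function derived from HMAC-SHA256.
    digest = hmac.new(key, f"{round_no}:{half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % (10 ** width)

def fpe_encrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    # Balanced Feistel network over an even-length decimal string.
    assert len(digits) % 2 == 0 and digits.isdigit()
    width = len(digits) // 2
    modulus = 10 ** width
    left, right = digits[:width], digits[width:]
    for r in range(rounds):
        f = _round_value(key, right, r, width)
        left, right = right, str((int(left) + f) % modulus).zfill(width)
    return left + right

def fpe_decrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    # Run the Feistel rounds in reverse to recover the plaintext.
    width = len(digits) // 2
    modulus = 10 ** width
    left, right = digits[:width], digits[width:]
    for r in reversed(range(rounds)):
        f = _round_value(key, left, r, width)
        left, right = str((int(right) - f) % modulus).zfill(width), left
    return left + right
```

In practice you would use Cloud DLP's CryptoReplaceFfxFpeConfig with a KMS-wrapped key rather than rolling your own cipher.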
cyberpunk21
1 year, 1 month ago
Selected Answer: C
C is correct
upvoted 1 times
...
ymkk
1 year, 1 month ago
Selected Answer: C
format-preserving encryption is the best technique because: - It preserves the original data format - It is reversible - It allows operations like sorting and searching - It protects the sensitive data through encryption except when needed to identify outliers
upvoted 4 times
...
akg001
1 year, 1 month ago
Selected Answer: D
D - right
upvoted 2 times
...
Mithung30
1 year, 2 months ago
Selected Answer: C
Correct is C
upvoted 1 times
...
marrechea
1 year, 6 months ago
Answer C
upvoted 2 times
...
TNT87
1 year, 6 months ago
Selected Answer: C
Answer C
upvoted 3 times
...
TNT87
1 year, 6 months ago
Selected Answer: D
Answer D
upvoted 1 times
TNT87
1 year, 6 months ago
Generalization is irreversible. That makes C the answer.
upvoted 2 times
...
...

Question 179

Question #: 179
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You need to set up a Cloud Interconnect connection between your company’s on-premises data center and VPC host network. You want to make sure that on-premises applications can only access Google APIs over the Cloud Interconnect and not through the public internet. You are required to only use APIs that are supported by VPC Service Controls to mitigate against exfiltration risk to non-supported APIs. How should you configure the network?

  • A. Enable Private Google Access on the regional subnets and global dynamic routing mode.
  • B. Create a CNAME to map *.googleapis.com to restricted.googleapis.com, and create A records for restricted.googleapis.com mapped to 199.36.153.8/30.
  • C. Use private.googleapis.com to access Google APIs using a set of IP addresses only routable from within Google Cloud, which are advertised as routes over the connection.
  • D. Use restricted googleapis.com to access Google APIs using a set of IP addresses only routable from within Google Cloud, which are advertised as routes over the Cloud Interconnect connection.
Suggested Answer: D 🗳️

Comments

KLei
3 months, 2 weeks ago
Selected Answer: D
Enables API access to Google APIs and services that are supported by VPC Service Controls. Blocks access to Google APIs and services that do not support VPC Service Controls. Does not support Google Workspace APIs or Google Workspace web applications such as Gmail and Google Docs
upvoted 1 times
...
shmoeee
6 months, 3 weeks ago
This is a repeated question
upvoted 1 times
...
cyberpunk21
1 year, 1 month ago
Selected Answer: D
D is correct. A doesn't address the issue. B looks right, but the restricted API range is 199.36.153.4/30, not 199.36.153.8/30. C is wrong. D checks out on all counts.
upvoted 4 times
...
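For completeness, the DNS side of routing restricted.googleapis.com over the Interconnect can be sketched as follows; the zone and network names are placeholders, while 199.36.153.4/30 is the documented restricted VIP range:

```shell
# Private zone overriding googleapis.com resolution inside the VPC.
gcloud dns managed-zones create googleapis-zone \
    --description="Route Google APIs to the restricted VIPs" \
    --dns-name=googleapis.com. \
    --visibility=private \
    --networks=my-vpc

# A records for the restricted VIP range (199.36.153.4/30).
gcloud dns record-sets create restricted.googleapis.com. \
    --zone=googleapis-zone --type=A --ttl=300 \
    --rrdatas=199.36.153.4,199.36.153.5,199.36.153.6,199.36.153.7

# CNAME every other googleapis.com name onto the restricted VIPs.
gcloud dns record-sets create "*.googleapis.com." \
    --zone=googleapis-zone --type=CNAME --ttl=300 \
    --rrdatas=restricted.googleapis.com.
```

On-premises resolvers need equivalent records, and the Cloud Router must advertise 199.36.153.4/30 over the Interconnect so the route is reachable from the data center.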
arpgaur
1 year, 1 month ago
D, use restricted.googleapis.com. https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid
upvoted 4 times
...
Sanjana2020
1 year, 2 months ago
D, restricted
upvoted 3 times
...

Question 180

Question #: 180
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization develops software involved in many open source projects and is concerned about software supply chain threats. You need to deliver provenance for the build to demonstrate the software is untampered.

What should you do?

  • A. 1. Hire an external auditor to review and provide provenance.
    2. Define the scope and conditions.
    3. Get support from the Security department or representative.
    4. Publish the attestation to your public web page.
  • B. 1. Review the software process.
    2. Generate private and public key pairs and use Pretty Good Privacy (PGP) protocols to sign the output software artifacts together with a file containing the address of your enterprise and point of contact.
    3. Publish the PGP signed attestation to your public web page.
  • C. 1. Publish the software code on GitHub as open source.
    2. Establish a bug bounty program, and encourage the open source community to review, report, and fix the vulnerabilities.
  • D. 1. Generate Supply Chain Levels for Software Artifacts (SLSA) level 3 assurance by using Cloud Build.
    2. View the build provenance in the Security insights side panel within the Google Cloud console.
Suggested Answer: D 🗳️

Comments

wojtek85
Highly Voted 1 year, 1 month ago
D is correct: https://cloud.google.com/build/docs/securing-builds/view-build-provenance
upvoted 6 times
...
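Viewing provenance is not limited to the console side panel; assuming an Artifact Registry image built by Cloud Build, a sketch of the CLI route (the image path is a placeholder):

```shell
# Inspect the SLSA build provenance attached to a built image.
gcloud artifacts docker images describe \
    us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest \
    --show-provenance
```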
i_am_robot
Most Recent 9 months, 4 weeks ago
Selected Answer: D
The best option would be D. Generate Supply Chain Levels for Software Artifacts (SLSA) level 3 assurance by using Cloud Build and view the build provenance in the Security insights side panel within the Google Cloud console. SLSA (pronounced “salsa”) is an end-to-end framework for ensuring the integrity of software artifacts throughout the software supply chain. The SLSA assurance levels provide a scalable compromise between the security benefits and the implementation costs. Level 3 is recommended for moderately to highly critical software and should provide strong, provenance-based security guarantees.
upvoted 3 times
...
cyberpunk21
1 year, 1 month ago
Selected Answer: D
D it is
upvoted 2 times
...
akg001
1 year, 1 month ago
Selected Answer: D
D is correct.
upvoted 2 times
...
Sanjana2020
1 year, 2 months ago
D is correct, I think?
upvoted 4 times
...

Question 181

Question #: 181
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization operates Virtual Machines (VMs) with only private IPs in the Virtual Private Cloud (VPC) with internet access through Cloud NAT. Every day, you must patch all VMs with critical OS updates and provide summary reports.

What should you do?

  • A. Validate that the egress firewall rules allow any outgoing traffic. Log in to each VM and execute OS specific update commands. Configure the Cloud Scheduler job to update with critical patches daily for daily updates.
  • B. Copy the latest patches to the Cloud Storage bucket. Log in to each VM, download the patches from the bucket, and install them.
  • C. Assign public IPs to VMs. Validate that the egress firewall rules allow any outgoing traffic. Log in to each VM, and configure a daily cron job to enable for OS updates at night during low activity periods.
  • D. Ensure that VM Manager is installed and running on the VMs. In the OS patch management service, configure the patch jobs to update with critical patches daily.
Suggested Answer: D 🗳️

Comments

i_am_robot
9 months, 4 weeks ago
Selected Answer: D
The best option would be D. Ensure that VM Manager is installed and running on the VMs. In the OS patch management service, configure the patch jobs to update with critical patches daily. This approach allows you to automate the process of patching your VMs with critical OS updates. VM Manager is a suite of tools that offers patch management, configuration management, and inventory management for VM instances. By using VM Manager’s OS patch management service, you can ensure that your VMs are always up-to-date with the latest patches.
upvoted 1 times
...
Xoxoo
1 year ago
Selected Answer: D
VM Manager is a suite of tools that can be used to manage operating systems for large virtual machine (VM) fleets running Windows and Linux on Compute Engine. It helps drive efficiency through automation and reduces the operational burden of maintaining these VM fleets. VM Manager includes several services such as OS patch management, OS inventory management, and OS configuration management. By using VM Manager, you can apply patches, collect operating system information, and install, remove, or auto-update software packages. The suite provides a high level of control and automation for managing large VM fleets on Google Cloud.
upvoted 1 times
...
cyberpunk21
1 year, 1 month ago
Selected Answer: D
D is correct; using VM Manager we can patch all the VMs.
upvoted 2 times
...
pfilourenco
1 year, 2 months ago
Selected Answer: D
D is correct.
upvoted 2 times
...
Sanjana2020
1 year, 2 months ago
A- validate egress firewall rules
upvoted 1 times
...
a190d62
1 year, 2 months ago
Selected Answer: D
VM manager is a suite of tools used to automate managing of the fleet of VMs (including OS patching) https://cloud.google.com/compute/docs/vm-manager
upvoted 3 times
...
K1SMM
1 year, 2 months ago
D. A VM doesn't need a public IP when using Cloud NAT.
upvoted 1 times
...

Question 182

Question #: 182
Topic #: 1
[All Professional Cloud Security Engineer Questions]

For compliance reporting purposes, the internal audit department needs you to provide the list of virtual machines (VMs) that have critical operating system (OS) security updates available, but not installed. You must provide this list every six months, and you want to perform this task quickly.

What should you do?

  • A. Run a Security Command Center security scan on all VMs to extract a list of VMs with critical OS vulnerabilities every six months.
  • B. Run a gcloud CLI command from the Command Line Interface (CLI) to extract the VM's OS version information every six months.
  • C. Ensure that the Cloud Logging agent is installed on all VMs, and extract the OS last update log date every six months.
  • D. Ensure the OS Config agent is installed on all VMs and extract the patch status dashboard every six months.
Suggested Answer: D 🗳️

Comments

i_am_robot
9 months, 4 weeks ago
Selected Answer: D
The best option would be D. Ensure the OS Config agent is installed on all VMs and extract the patch status dashboard every six months. The OS Config agent is a service that provides a fast and flexible way to manage operating system configurations across an entire fleet of virtual machines. It can provide information about the patch state of a VM, including which patches are installed, which patches are available, and the severity of the patches. This would allow you to quickly identify VMs that have critical OS security updates available but not installed.
upvoted 2 times
...
gkarthik1919
1 year ago
D is correct. https://cloud.google.com/compute/docs/vm-manager
upvoted 1 times
...
cyberpunk21
1 year, 1 month ago
Selected Answer: D
D is correct
upvoted 1 times
...
cyberpunk21
1 year, 1 month ago
Selected Answer: D
D is correct. C could work, but it is not as effective as D.
upvoted 1 times
...
RuchiMishra
1 year, 1 month ago
Selected Answer: D
D: https://cloud.google.com/compute/docs/os-patch-management#:~:text=A%20patch%20deployment%20is%20initiated,target%20VMs%20to%20start%20patching. Cannot be A, as the VM Manager patch compliance feature is in preview in SCC. https://cloud.google.com/security-command-center/docs/concepts-vulnerabilities-findings
upvoted 2 times
...
pfilourenco
1 year, 2 months ago
Selected Answer: D
I think it is D, since you can't run Security Command Center security scans without VM Manager enabled. "If you enable VM Manager with the Security Command Center Premium tier, VM Manager writes its vulnerability reports to Security Command Center by default"
upvoted 1 times
...
Sanjana2020
1 year, 2 months ago
C- Cloud Logging Agent
upvoted 1 times
...
K1SMM
1 year, 2 months ago
A. Security Command Center is integrated with VM Manager.
upvoted 1 times
...

Question 183

Question #: 183
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your company conducts clinical trials and needs to analyze the results of a recent study that are stored in BigQuery. The interval when the medicine was taken contains start and stop dates. The interval data is critical to the analysis, but specific dates may identify a particular batch and introduce bias. You need to obfuscate the start and end dates for each row and preserve the interval data.

What should you do?

  • A. Use date shifting with the context set to the unique ID of the test subject.
  • B. Extract the date using TimePartConfig from each date field and append a random month and year.
  • C. Use bucketing to shift values to a predetermined date based on the initial value.
  • D. Use the FFX mode of format preserving encryption (FPE) and maintain data consistency.
Suggested Answer: A 🗳️

Comments

i_am_robot
9 months, 4 weeks ago
Selected Answer: A
The best option would be A. Use date shifting with the context set to the unique ID of the test subject. Date shifting is a technique used to obfuscate date data by shifting all dates in a dataset by a random number of days, while preserving the intervals between the dates. By setting the context to the unique ID of the test subject, you ensure that the same random shift is applied to all dates for a given test subject, preserving the interval data. This method effectively obfuscates the specific dates, reducing the risk of bias, while still allowing for meaningful analysis of the data.
upvoted 2 times
...
Xoxoo
1 year ago
Selected Answer: A
Options A and D both work, but the focus here is preserving the interval data, so option A is better suited. "Date shifting techniques randomly shift a set of dates but preserve the sequence and duration of a period of time. Shifting dates is usually done in context to an individual or an entity. That is, each individual's dates are shifted by an amount of time that is unique to that individual."
upvoted 4 times
...
cyberpunk21
1 year, 1 month ago
Selected Answer: A
Option A is good
upvoted 2 times
...
a190d62
1 year, 2 months ago
Selected Answer: A
A - date shifting. Bucketing is not an option here because we would lose the order, and encryption is overkill. https://cloud.google.com/dlp/docs/concepts-date-shifting
upvoted 3 times
...
Sanjana2020
1 year, 2 months ago
A- date shifting.
upvoted 1 times
...

Question 184

Question #: 184
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You have a highly sensitive BigQuery workload that contains personally identifiable information (PII) that you want to ensure is not accessible from the internet. To prevent data exfiltration, only requests from authorized IP addresses are allowed to query your BigQuery tables.

What should you do?

  • A. Use service perimeter and create an access level based on the authorized source IP address as the condition.
  • B. Use Google Cloud Armor security policies defining an allowlist of authorized IP addresses at the global HTTPS load balancer.
  • C. Use the Restrict Resource Service Usage organization policy constraint along with Cloud Data Loss Prevention (DLP).
  • D. Use the Restrict allowed Google Cloud APIs and services organization policy constraint along with Cloud Data Loss Prevention (DLP).
Suggested Answer: A 🗳️

Comments

pfilourenco
10 months ago
Selected Answer: A
A is the correct one.
upvoted 1 times
...
b6f53d8
1 year, 2 months ago
A and B would work, but A is better in my opinion.
upvoted 1 times
...
i_am_robot
1 year, 3 months ago
Selected Answer: A
The best option would be A. Use service perimeter and create an access level based on the authorized source IP address as the condition. This approach allows you to create a boundary that controls access to Google Cloud resources for services within the same perimeter. By creating an access level based on the authorized source IP address as the condition, you can ensure that only requests from authorized IP addresses are allowed to query your BigQuery tables. This effectively prevents data exfiltration and ensures that your sensitive BigQuery workload is not accessible from the internet.
upvoted 2 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: A
Option A is correct
upvoted 2 times
...
pfilourenco
1 year, 8 months ago
Selected Answer: A
A is correct.
upvoted 4 times
...
Sanjana2020
1 year, 8 months ago
I think its A.
upvoted 1 times
...

Question 185

Question #: 185
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization is moving virtual machines (VMs) to Google Cloud. You must ensure that operating system images that are used across your projects are trusted and meet your security requirements.

What should you do?

  • A. Implement an organization policy to enforce that boot disks can only be created from images that come from the trusted image project.
  • B. Implement an organization policy constraint that enables the Shielded VM service on all projects to enforce the trusted image repository usage.
  • C. Create a Cloud Function that is automatically triggered when a new virtual machine is created from the trusted image repository. Verify that the image is not deprecated.
  • D. Automate a security scanner that verifies that no common vulnerabilities and exposures (CVEs) are present in your trusted image repository.
Show Suggested Answer Hide Answer
Suggested Answer: A 🗳️
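The constraint in option A can be expressed as an organization policy file. A minimal sketch, assuming a hypothetical trusted image project named `trusted-images-prj` (substitute your own image repository project):

```yaml
# Org policy restricting boot-disk image sources to a single trusted
# image project. "trusted-images-prj" is a placeholder.
constraint: constraints/compute.trustedImageProjects
listPolicy:
  allowedValues:
  - projects/trusted-images-prj
```

Applied at the organization level with something like `gcloud resource-manager org-policies set-policy policy.yaml --organization=ORG_ID`, this blocks boot-disk creation from any other image source across all projects.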

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
MoAk
4 months, 2 weeks ago
Selected Answer: A
The question mentions 'trust'. While D can satisfy this to some extent, it's not what the question is getting at. Answer is A
upvoted 1 times
...
lanjr01
1 year ago
If an org policy enforces that only trusted boot disk images can be used across the projects, an un-trusted boot image cannot be used in the first place. Answer A seems correct as it is a proactive measure, with less need to scan for common vulnerabilities afterwards. On the other hand, the question can be read as a "lift & shift" effort, which suggests virtual machines are moving to Google Cloud without a prior security assessment before the move.
upvoted 1 times
...
desertlotus1211
1 year, 1 month ago
I'm going to have to change my previous answer... It asked about: ensuring that operating system images that are used across your projects are trusted and meet your security requirements... that will be Answer D not A.
upvoted 1 times
...
desertlotus1211
1 year, 7 months ago
What about Answer D?
upvoted 1 times
desertlotus1211
1 year, 7 months ago
It should be Answer A & D... Image repository is also the image project
upvoted 1 times
desertlotus1211
1 year, 7 months ago
Answer A is correct
upvoted 1 times
...
...
...
cyberpunk21
1 year, 7 months ago
Selected Answer: A
Option A looks more like it so is B but B seems a bit complicated and costly.
upvoted 2 times
...
pfilourenco
1 year, 8 months ago
Selected Answer: A
A is correct.
upvoted 2 times
...
a190d62
1 year, 8 months ago
Selected Answer: A
It's A - https://cloud.google.com/compute/docs/images/restricting-image-access
upvoted 4 times
...
Sanjana2020
1 year, 8 months ago
Is A correct?
upvoted 1 times
...

Question 186

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 186 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 186
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You have stored company approved compute images in a single Google Cloud project that is used as an image repository. This project is protected with VPC Service Controls and exists in the perimeter along with other projects in your organization. This lets other projects deploy images from the image repository project. A team requires deploying a third-party disk image that is stored in an external Google Cloud organization. You need to grant read access to the disk image so that it can be deployed into the perimeter.

What should you do?

  • A. Allow the external project by using the organizational policy, constraints/compute.trustedImageProjects.
  • B. 1. Update the perimeter.
    2. Configure the egressTo field to include the external Google Cloud project number as an allowed resource and the serviceName to compute.googleapis.com.
    3. Configure the egressFrom field to set identityType to ANY_IDENTITY.
  • C. 1. Update the perimeter.
    2. Configure the ingressFrom field to set identityType to ANY_IDENTITY.
    3. Configure the ingressTo field to include the external Google Cloud project number as an allowed resource and the serviceName to compute.googleapis.com.
  • D. 1. Update the perimeter.
    2. Configure the egressTo field to set identityType to ANY_IDENTITY.
    3. Configure the egressFrom field to include the external Google Cloud project number as an allowed resource and the serviceName to compute.googleapis.com.
Show Suggested Answer Hide Answer
Suggested Answer: B 🗳️
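The egress rule described in option B can be written as a policy file for `gcloud access-context-manager perimeters update --set-egress-policies=FILE`. A hedged sketch — the external project number `123456789` is a placeholder:

```yaml
# Egress rule: any identity inside the perimeter may call Compute Engine
# methods against the external image project (placeholder project number).
- egressFrom:
    identityType: ANY_IDENTITY
  egressTo:
    resources:
    - projects/123456789
    operations:
    - serviceName: compute.googleapis.com
      methodSelectors:
      - method: "*"
```

This matches the shape of the secure-data-exchange example in the VPC Service Controls docs: the read of the external disk image is an egress from the perimeter, so it is `egressTo` the external project, not an ingress rule.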

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
zanhsieh
3 months, 2 weeks ago
Selected Answer: B
B. See the official Google example: https://cloud.google.com/vpc-service-controls/docs/secure-data-exchange#grant-access-third-party-compute-engine-disk-image
Note that the image mentioned in the question is a Compute Engine image, not a Docker image.
A: No. This option is meant for public images, not a private, third-party-owned image.
C: No. This option would be configured on the third-party image project's side.
D: No. egressTo doesn't have an identityType field. See the format in: https://cloud.google.com/vpc-service-controls/docs/configure-identity-groups#configure-identity-group-egress
upvoted 1 times
...
Pime13
4 months ago
Selected Answer: B
Option C involves configuring the ingressFrom and ingressTo fields, which are used to control incoming traffic into the perimeter. However, in this scenario, you need to allow outgoing traffic from your VPC Service Controls perimeter to the external project to access the third-party disk image. Option D is not suitable because it incorrectly configures the egressFrom and egressTo fields. Specifically, it sets the identityType to ANY_IDENTITY in the egressTo field, which is not necessary. Instead, you need to specify the external Google Cloud project number as an allowed resource in the egressTo field. Option B correctly configures the egressTo field to include the external project number and the serviceName to compute.googleapis.com, while setting the identityType to ANY_IDENTITY in the egressFrom field. This ensures that the necessary outbound traffic is allowed from your VPC Service Controls perimeter to the external project.
upvoted 1 times
...
pico
4 months, 3 weeks ago
Selected Answer: C
Why:
VPC Service Controls and Perimeters: VPC Service Controls create perimeters around your resources to control access. You need to explicitly configure how resources can enter or exit this perimeter.
Ingress vs. Egress: Since you want to allow a resource (the disk image) from outside the perimeter to be deployed inside, this is an ingress operation. Egress refers to resources moving out of the perimeter.
ANY_IDENTITY: This setting allows any authenticated Google Cloud identity to access the resource. This is necessary because the disk image is in a different organization.
upvoted 1 times
...
dija123
6 months, 2 weeks ago
Selected Answer: B
Agree with B
upvoted 2 times
...
desertlotus1211
8 months ago
You're pulling the image in, so you must egress out. Answer B.
upvoted 2 times
...
pbrvgl
10 months, 3 weeks ago
Alternative C. It's about an OUTSIDE project willing to deploy a trusted image WITHIN the perimeter. That's "Ingress", as defined here: https://cloud.google.com/vpc-service-controls/docs/ingress-egress-rules#definition-ingress-egress
upvoted 1 times
...
MaryKey
1 year, 1 month ago
Selected Answer: C
The question asks about ingress. You are not asked to modify external organisation's policy (unless you are!)
upvoted 1 times
...
ArizonaClassics
1 year, 1 month ago
The correct option would be: **B. 1. Update the perimeter. 2. Configure the egressTo field to include the external Google Cloud project number as an allowed resource and the serviceName to compute.googleapis.com. Configure the egressFrom field to set identityType to ANY_IDENTITY.** This approach allows for controlled egress from your project to the external project to get the disk image while maintaining the VPC Service Controls.
upvoted 1 times
...
cyberpunk21
1 year, 1 month ago
Selected Answer: B
External cloud organization so egress not ingress. I choose option B.
upvoted 4 times
...
anshad666
1 year, 1 month ago
Selected Answer: B
A Compute Engine client within a service perimeter calling a Compute Engine create operation where the image resource is outside the perimeter. https://cloud.google.com/vpc-service-controls/docs/ingress-egress-rules#:~:text=Egress%20Refers%20to%20any%20access,resource%20is%20outside%20the%20perimeter.
upvoted 4 times
...
ymkk
1 year, 1 month ago
I choose option C. Since the external disk image needs to be deployed into the perimeter, resources inside the perimeter need read access to the external disk image. This requires configuring ingress rules in the perimeter.
upvoted 4 times
...
ymkk
1 year, 1 month ago
Why not C?
upvoted 1 times
...
pfilourenco
1 year, 2 months ago
Selected Answer: B
B is correct
upvoted 2 times
...
Alejondri
1 year, 2 months ago
I think It's B
upvoted 1 times
...

Question 187

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 187 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 187
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A service account key has been publicly exposed on multiple public code repositories. After reviewing the logs, you notice that the keys were used to generate short-lived credentials. You need to immediately remove access with the service account.

What should you do?

  • A. Delete the compromised service account.
  • B. Disable the compromised service account key.
  • C. Wait until the service account credentials expire automatically.
  • D. Rotate the compromised service account key.
Show Suggested Answer Hide Answer
Suggested Answer: A 🗳️
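The remediation in option A maps to standard gcloud commands. A hedged sketch — the service account email is a placeholder:

```shell
# Disabling the exposed key alone does NOT revoke short-lived credentials
# already minted from it; the service account itself must be disabled or
# deleted. The email below is a placeholder.
gcloud iam service-accounts disable \
    compromised-sa@my-project.iam.gserviceaccount.com

# Or, irreversibly:
gcloud iam service-accounts delete \
    compromised-sa@my-project.iam.gserviceaccount.com
```

Disabling is reversible and equally effective at cutting off the short-lived credentials, which is why several commenters below debate A versus B; the exam answer hinges on "immediately remove access with the service account."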

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
a190d62
Highly Voted 1 year, 8 months ago
Selected Answer: A
Normally you would just choose (D) to not break the business continuity. But in this case, when short-lived credentials are created you need to disable/delete service account (disabling service account key doesn't revoke short-lived credentials) https://cloud.google.com/iam/docs/keys-disable-enable#disabling
upvoted 12 times
...
Pime13
Most Recent 4 months ago
Selected Answer: A
Important: Disabling a service account key does not revoke short-lived credentials that were issued based on the key. To revoke a compromised short-lived credential, you must disable or delete the service account that the credential represents. If you do so, any workload that uses the service account will immediately lose access to your resources. https://cloud.google.com/iam/docs/keys-disable-enable#disabling
upvoted 1 times
...
Zek
4 months ago
Selected Answer: A
https://cloud.google.com/iam/docs/keys-disable-enable#disabling Disabling a service account key does not revoke short-lived credentials that were issued based on the key. To revoke a compromised short-lived credential, you must disable or delete the service account that the credential represents.
upvoted 1 times
...
BPzen
4 months, 1 week ago
Selected Answer: B
B. Update the perimeter with egressTo and set identityType to ANY_IDENTITY What it does: Updates the service perimeter to allow egress (outbound) traffic from the perimeter to the external Google Cloud project. egressTo specifies the allowed external resource (e.g., the external project with the disk image). identityType: ANY_IDENTITY allows any identity within the perimeter to make the request. Why it's correct: This is the correct way to allow resources in the perimeter to read from the external project while maintaining VPC Service Controls restrictions. Highly suitable, as it enables access to the third-party disk image while adhering to VPC Service Controls.
upvoted 1 times
MoAk
4 months, 1 week ago
wrong Q bud.
upvoted 1 times
...
...
MoAk
4 months, 3 weeks ago
Selected Answer: A
As per https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys#code-repositories
upvoted 1 times
...
DattaHinge
6 months, 2 weeks ago
Selected Answer: B
Disabling the compromised service account key immediately prevents any further unauthorized access
upvoted 1 times
...
glb2
1 year ago
Selected Answer: A
A. Delete the compromised service account
upvoted 1 times
...
CISSP987
1 year, 6 months ago
Selected Answer: B
The best answer is B. Disable the compromised service account key. Disabling the compromised service account key will immediately revoke access to all resources that are using the key. This will prevent any further unauthorized access to your cloud environment. A. Delete the compromised service account. Deleting the compromised service account will also revoke access to all resources that are using the account. However, this will also delete all of the data associated with the account. This may not be an option if you need to preserve the data.
upvoted 2 times
...
ArizonaClassics
1 year, 7 months ago
A. Delete the compromised service account: Deleting the service account will immediately revoke its access, but it may also break systems or services that depend on this service account. This is usually a last-resort measure and could be disruptive to services using the account legitimately.
upvoted 2 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: A
To revoke short-lived credentials, the service account needs to be deleted.
upvoted 2 times
...
ymkk
1 year, 7 months ago
Selected Answer: A
I choose option A. Disabling a service account key does not revoke short-lived credentials that were issued based on the key. To revoke a compromised short-lived credential, you must delete the service account that the credential represents. If you do so, any workload that uses the service account will immediately lose access to your resources.
upvoted 3 times
nah99
4 months, 2 weeks ago
Same warning is showed on delete page docs https://cloud.google.com/iam/docs/keys-create-delete#deleting
upvoted 1 times
nah99
4 months, 2 weeks ago
nvm that's for deleting the key... so yeah option A
upvoted 1 times
...
...
...
akg001
1 year, 8 months ago
A- is correct. https://cloud.google.com/iam/docs/keys-disable-enable#:~:text=Important%3A%20Disabling%20a%20service%20account,account%20that%20the%20credential%20represents.
upvoted 2 times
...
Sanjana2020
1 year, 8 months ago
Why not B?
upvoted 2 times
cyberpunk21
1 year, 7 months ago
disabling service account key doesn't revoke short-lived credentials
upvoted 3 times
...
...

Question 188

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 188 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 188
Topic #: 1
[All Professional Cloud Security Engineer Questions]

A company is using Google Kubernetes Engine (GKE) with container images of a mission-critical application. The company wants to scan the images for known security issues and securely share the report with the security team without exposing them outside Google Cloud.

What should you do?

  • A. 1. Enable Container Threat Detection in the Security Command Center Premium tier.
    2. Upgrade all clusters that are not on a supported version of GKE to the latest possible GKE version.
    3. View and share the results from the Security Command Center.
  • B. 1. Use an open source tool in Cloud Build to scan the images.
    2. Upload reports to publicly accessible buckets in Cloud Storage by using gsutil.
    3. Share the scan report link with your security department.
  • C. 1. Enable vulnerability scanning in the Artifact Registry settings.
    2. Use Cloud Build to build the images.
    3. Push the images to the Artifact Registry for automatic scanning.
    4. View the reports in the Artifact Registry.
  • D. 1. Get a GitHub subscription.
    2. Build the images in Cloud Build and store them in GitHub for automatic scanning.
    3. Download the report from GitHub and share with the Security Team.
Show Suggested Answer Hide Answer
Suggested Answer: C 🗳️
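The flow in option C can be sketched with two commands. Treat this as a sketch: the image path and project are placeholders, and flags should be checked against the current gcloud reference.

```shell
# Enable automatic vulnerability scanning for Artifact Registry
# (the Container Scanning API) in the project.
gcloud services enable containerscanning.googleapis.com

# After Cloud Build pushes an image, inspect the scan results.
# The image path is a placeholder.
gcloud artifacts docker images describe \
    us-central1-docker.pkg.dev/my-project/my-repo/app:latest \
    --show-package-vulnerability
```

Once the API is enabled, every image pushed to Artifact Registry is scanned automatically, and the reports stay inside Google Cloud where the security team can be granted viewer access.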

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
espressoboy
Highly Voted 1 year ago
C Seems like the best fit. I initially chose A but: "The service evaluates all changes and remote access attempts to detect runtime attacks in near-real time." : https://cloud.google.com/security-command-center/docs/concepts-container-threat-detection-overview This has nothing to do with KNOWN security Vulns in images
upvoted 6 times
...
Pime13
Most Recent 4 months ago
Selected Answer: C
Option A involves enabling Container Threat Detection in the Security Command Center Premium tier, upgrading clusters, and viewing and sharing results from the Security Command Center. While this option provides robust threat detection and security insights, it is more focused on detecting threats and anomalies rather than specifically scanning container images for known vulnerabilities. Option C is more directly aligned with the requirement to scan container images for known security issues and securely share the report within Google Cloud. It leverages the Artifact Registry's built-in vulnerability scanning feature, which is specifically designed for this purpose.
upvoted 1 times
...
dija123
6 months, 2 weeks ago
Selected Answer: C
100% C
upvoted 1 times
...
Andrei_Z
1 year, 1 month ago
Selected Answer: C
it is C
upvoted 1 times
...
ArizonaClassics
1 year, 1 month ago
C. Enable vulnerability scanning in Artifact Registry, use Cloud Build, push images for scanning, view reports: This option fulfills all the requirements. It scans images for vulnerabilities using Google Cloud's Artifact Registry and allows viewing of reports securely within the Google Cloud environment. Cloud Build can also be used to build the images before they are pushed for scanning, which adds an extra layer of validation.
upvoted 2 times
...
cyberpunk21
1 year, 1 month ago
Selected Answer: C
I am going with option C, all things considered (cost, time, etc.). Option A sounds reasonable, but implementing it requires upgrading to the Premium tier, and the security issues are already known, so it's not worth it. With option C we can do a vulnerability scan without paying extra.
upvoted 2 times
...
ymkk
1 year, 1 month ago
Selected Answer: A
https://cloud.google.com/security-command-center/docs/concepts-container-threat-detection-overview
upvoted 2 times
Nachtwaker
7 months, 1 week ago
Don't agree, should be C since it is requesting scans from images (so not running container images). The images are static, stored in container registry, not (yet) deployed in GKE.
upvoted 1 times
...
...
a190d62
1 year, 2 months ago
Selected Answer: C
C: B & D are out due to the fact that they expose the results of the scan. A & C remain, but to be honest I don't see how updating GKE to the latest version (A) would give me a better vulnerability scan result
upvoted 2 times
akilaz
1 year, 1 month ago
"To detect potential threats to your containers, make sure that your clusters are on a supported version of Google Kubernetes Engine (GKE)" https://cloud.google.com/security-command-center/docs/how-to-use-container-threat-detection Additionally, answer C doesn't include sharing the report. So in my opinion A
upvoted 3 times
...
a190d62
1 year, 2 months ago
and (never forget about it people) link: https://cloud.google.com/artifact-registry/docs/analysis
upvoted 1 times
...
...

Question 189

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 189 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 189
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your application is deployed as a highly available, cross-region solution behind a global external HTTP(S) load balancer. You notice significant spikes in traffic from multiple IP addresses, but it is unknown whether the IPs are malicious. You are concerned about your application's availability. You want to limit traffic from these clients over a specified time interval.

What should you do?

  • A. Configure a throttle action by using Google Cloud Armor to limit the number of requests per client over a specified time interval.
  • B. Configure a rate_based_ban action by using Google Cloud Armor and set the ban_duration_sec parameter to the specified lime interval.
  • C. Configure a firewall rule in your VPC to throttle traffic from the identified IP addresses.
  • D. Configure a deny action by using Google Cloud Armor to deny the clients that issued too many requests over the specified time interval.
Show Suggested Answer Hide Answer
Suggested Answer: A 🗳️
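A throttle rule per option A can be added to a Cloud Armor security policy roughly as follows. This is a sketch with placeholder values (policy name, threshold, interval); verify flags against the current gcloud reference.

```shell
# Throttle rule: limit each client IP to 100 requests per 60 seconds;
# requests over the threshold receive HTTP 429. "my-policy" is a placeholder.
gcloud compute security-policies rules create 100 \
    --security-policy=my-policy \
    --src-ip-ranges="*" \
    --action=throttle \
    --rate-limit-threshold-count=100 \
    --rate-limit-threshold-interval-sec=60 \
    --conform-action=allow \
    --exceed-action=deny-429 \
    --enforce-on-key=IP
```

Unlike `rate_based_ban`, `throttle` keeps serving conforming traffic from each client, which suits the scenario where the IPs are not known to be malicious.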

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
Xoxoo
6 months, 3 weeks ago
Selected Answer: A
To limit traffic from the identified IP addresses over a specified time interval, you should configure a throttle action by using Google Cloud Armor. This will limit the number of requests per client over a specified time interval, which can help prevent your application from being overwhelmed by traffic spikes. Option B is not recommended because it would ban the clients that issue too many requests over the specified time interval, which might not be desirable if the clients are legitimate. Option C is not recommended because it would throttle traffic from all IP addresses that match the firewall rule, which might not be desirable if some of the IP addresses are legitimate. Option D is not recommended because it would deny the clients that issue too many requests over the specified time interval, which might not be desirable if the clients are legitimate. Therefore, Option A is the most appropriate choice for limiting traffic from multiple IP addresses over a specified time interval.
upvoted 2 times
...
ArizonaClassics
7 months, 1 week ago
When dealing with potential DDoS attacks or unexpected spikes in traffic, it's essential to handle the situation carefully to maintain the availability of your application. Here are the options you have: A. Configure a throttle action by using Google Cloud Armor: Google Cloud Armor allows you to define security policies that can throttle clients based on the number of incoming requests over a certain time period. This ensures that legitimate users are not completely blocked while also preventing any one client from overloading the system.
upvoted 1 times
...
cyberpunk21
7 months, 3 weeks ago
Selected Answer: A
All can be done, but option A is correct because of the phrase "number of requests per client."
upvoted 2 times
...
a190d62
8 months, 1 week ago
Selected Answer: A
A you want to limit, not ban traffic https://cloud.google.com/armor/docs/rate-limiting-overview#throttle-traffic
upvoted 4 times
...
K1SMM
8 months, 1 week ago
A https://cloud.google.com/blog/products/identity-security/announcing-new-cloud-armor-rate-limiting-adaptive-protection-and-bot-defense
upvoted 1 times
...

Question 190

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 190 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 190
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization is using Active Directory and wants to configure Security Assertion Markup Language (SAML). You must set up and enforce single sign-on (SSO) for all users.

What should you do?

  • A. 1. Create a new SAML profile.
    2. Populate the sign-in and sign-out page URLs.
    3. Upload the X.509 certificate.
    4. Configure Entity ID and ACS URL in your IdP.
  • B. 1. Configure prerequisites for OpenID Connect (OIDC) in your Active Directory (AD) tenant.
    2. Verify the AD domain.
    3. Decide which users should use SAML.
    4. Assign the pre-configured profile to the select organizational units (OUs) and groups.
  • C. 1. Create a new SAML profile.
    2. Upload the X.509 certificate.
    3. Enable the change password URL.
    4. Configure Entity ID and ACS URL in your IdP.
  • D. 1. Manage SAML profile assignments.
    2. Enable OpenID Connect (OIDC) in your Active Directory (AD) tenant.
    3. Verify the domain.
Show Suggested Answer Hide Answer
Suggested Answer: A 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
ArizonaClassics
7 months, 1 week ago
When configuring SAML-based Single Sign-On (SSO) in an organization that's using Active Directory, the general steps would involve setting up a SAML profile, specifying the necessary URLs for sign-in and sign-out processes, uploading an X.509 certificate for secure communication, and setting up the Entity ID and Assertion Consumer Service (ACS) URL in the Identity Provider (which in this case would be Active Directory). A. Create a new SAML profile, populate URLs, upload X.509 certificate, configure Entity ID and ACS URL: This option comprehensively covers the steps necessary for setting up SAML-based SSO.
upvoted 2 times
...
cyberpunk21
7 months, 3 weeks ago
Selected Answer: A
Option A follows the right steps
upvoted 2 times
...
a190d62
8 months, 1 week ago
Selected Answer: A
A you need to enter sign-in/sign-out page URL https://support.google.com/cloudidentity/answer/12032922?hl=en (Configure the SSO profile for your org)
upvoted 4 times
...

Question 191

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 191 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 191
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Employees at your company use their personal computers to access your organization's Google Cloud console. You need to ensure that users can only access the Google Cloud console from their corporate-issued devices and verify that they have a valid enterprise certificate.

What should you do?

  • A. Implement an Access Policy in BeyondCorp Enterprise to verify the device certificate. Create an access binding with the access policy just created.
  • B. Implement a VPC firewall policy. Activate packet inspection and create an allow rule to validate and verify the device certificate.
  • C. Implement an organization policy to verify the certificate from the access context.
  • D. Implement an Identity and Access Management (IAM) conditional policy to verify the device certificate.
Show Suggested Answer Hide Answer
Suggested Answer: A 🗳️
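The access binding in option A is created with Access Context Manager cloud bindings. A hedged sketch — the group key and access level name are placeholders, and the command shape should be verified against the current BeyondCorp Enterprise documentation:

```shell
# Bind an access level (assumed to require the enterprise device
# certificate) to a user group. GROUP_ID and the level path are placeholders.
gcloud access-context-manager cloud-bindings create \
    --group-key=GROUP_ID \
    --level=accessPolicies/POLICY_ID/accessLevels/corp_device_cert
```

With the binding in place, members of the group can reach the Google Cloud console only from devices that satisfy the access level's certificate condition.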

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
Bettoxicity
6 months, 1 week ago
Selected Answer: A
BeyondCorp and Access Policies: BeyondCorp is a Google Cloud security framework that focuses on zero-trust principles. Access Policies within BeyondCorp allow you to define granular access controls based on various attributes, including device certificates.
upvoted 1 times
...
uiuiui
11 months, 1 week ago
Selected Answer: A
must be A
upvoted 1 times
...
Xoxoo
1 year ago
Selected Answer: A
upvoted 1 times
...
ArizonaClassics
1 year, 1 month ago
A. Implement an Access Policy in BeyondCorp Enterprise to verify the device certificate. Create an access binding with the access policy just created. This approach is designed to enforce zero-trust access policies, making it a strong fit for the stated needs of only allowing access from corporate-issued devices with valid enterprise certificates.
upvoted 3 times
...
cyberpunk21
1 year, 1 month ago
Only option A speaks about the device here; the remaining options are all false
upvoted 1 times
...
pfilourenco
1 year, 2 months ago
Selected Answer: A
A is correct
upvoted 2 times
...
K1SMM
1 year, 2 months ago
A https://cloud.google.com/beyondcorp?hl=pt-br
upvoted 3 times
...

Question 192

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 192 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 192
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization is rolling out a new continuous integration and delivery (CI/CD) process to deploy infrastructure and applications in Google Cloud. Many teams will use their own instances of the CI/CD workflow. It will run on Google Kubernetes Engine (GKE). The CI/CD pipelines must be designed to securely access Google Cloud APIs.

What should you do?

  • A. 1. Create two service accounts, one for the infrastructure and one for the application deployment.
    2. Use workload identities to let the pods run the two pipelines and authenticate with the service accounts.
    3. Run the infrastructure and application pipelines in separate namespaces.
  • B. 1. Create a dedicated service account for the CI/CD pipelines.
    2. Run the deployment pipelines in a dedicated nodes pool in the GKE cluster.
    3. Use the service account that you created as identity for the nodes in the pool to authenticate to the Google Cloud APIs.
  • C. 1. Create individual service accounts for each deployment pipeline.
    2. Add an identifier for the pipeline in the service account naming convention.
    3. Ensure each pipeline runs on dedicated pods.
    4. Use workload identity to map a deployment pipeline pod with a service account.
  • D. 1. Create service accounts for each deployment pipeline.
    2. Generate private keys for the service accounts.
    3. Securely store the private keys as Kubernetes secrets accessible only by the pods that run the specific deploy pipeline.
Show Suggested Answer Hide Answer
Suggested Answer: A 🗳️
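The Workload Identity mapping in option A pairs a Kubernetes ServiceAccount with a Google service account. A sketch with placeholder names (`infra-pipeline`, namespace `infra`, project `my-project`):

```yaml
# Kubernetes ServiceAccount mapped to a Google service account via
# Workload Identity. All names and the project ID are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: infra-pipeline
  namespace: infra
  annotations:
    iam.gke.io/gcp-service-account: infra-pipeline@my-project.iam.gserviceaccount.com
```

The binding is completed on the IAM side with `gcloud iam service-accounts add-iam-policy-binding infra-pipeline@my-project.iam.gserviceaccount.com --role=roles/iam.workloadIdentityUser --member="serviceAccount:my-project.svc.id.goog[infra/infra-pipeline]"`, so pipeline pods authenticate to Google Cloud APIs without exported keys.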

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
7f97f9f
1 month, 2 weeks ago
Selected Answer: C
A is a very strong option. Using separate service accounts for infrastructure and application deployments follows the principle of least privilege. Workload Identity is the recommended way to securely authenticate GKE pods with Google Cloud APIs. Separate namespaces add an extra layer of isolation. However, C is the most secure and granular approach. Creating individual service accounts per pipeline follows the principle of least privilege. Workload Identity ensures secure authentication. This is the best answer.
upvoted 3 times
...
JohnDohertyDoe
3 months, 2 weeks ago
Selected Answer: C
Granular permissions per deployment pipeline would allow you to separate permissions based on the application teams. Additionally you would want to avoid container escapes by ensuring each deployment runs in a different pod. While A makes it simpler, C is better.
upvoted 2 times
...
Andrei_Z
7 months, 1 week ago
Selected Answer: D
it is D
upvoted 1 times
espressoboy
6 months, 3 weeks ago
https://cloud.google.com/kubernetes-engine/docs/concepts/security-overview#giving_pods_access_to_resources
upvoted 1 times
...
...
GCBC
7 months, 1 week ago
Selected Answer: A
Ans is A, 2 SAs - one for infra and one for deployment
upvoted 3 times
...
cyberpunk21
7 months, 3 weeks ago
Selected Answer: A
A is correct
upvoted 2 times
...
alkaloid
8 months, 1 week ago
I'll go with A. https://cloud.google.com/kubernetes-engine/docs/concepts/security-overview#giving_pods_access_to_resources
upvoted 1 times
...
pfilourenco
8 months, 1 week ago
Selected Answer: A
A is correct: use workload identities and separated namespaces.
upvoted 2 times
...

Question 193

Question #: 193
Topic #: 1

Your organization's customers must scan and upload their contract and driver's license into a web portal in Cloud Storage. You must remove all personally identifiable information (PII) from files that are older than 12 months. Also, you must archive the anonymized files for retention purposes.

What should you do?

  • A. Set a time to live (TTL) of 12 months for the files in the Cloud Storage bucket that removes PII and moves the files to the archive storage class.
  • B. Create a Cloud Data loss Prevention (DLP) inspection job that de-identifies PII in files created more than 12 months ago and archives them to another Cloud Storage bucket. Delete the original files.
  • C. Configure the Autoclass feature of the Cloud Storage bucket to de-identify PII. Archive the files that are older than 12 months. Delete the original files.
  • D. Schedule a Cloud Key Management Service (KMS) rotation period of 12 months for the encryption keys of the Cloud Storage files containing PII to de-identify them. Delete the original keys.
Suggested Answer: B 🗳️
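As a sketch of option B, a Cloud DLP inspection job with a de-identify action can write anonymized copies to an archive bucket. This is an illustrative request body for `projects.dlpJobs.create`; bucket names and infoTypes are hypothetical, and selecting only files older than 12 months (plus deleting the originals) would be handled separately, for example by a scheduled job:

```json
{
  "inspectJob": {
    "storageConfig": {
      "cloudStorageOptions": {
        "fileSet": { "url": "gs://customer-uploads/**" }
      }
    },
    "inspectConfig": {
      "infoTypes": [
        { "name": "PERSON_NAME" },
        { "name": "US_DRIVERS_LICENSE_NUMBER" }
      ]
    },
    "actions": [
      {
        "deidentify": {
          "cloudStorageOutput": "gs://anonymized-archive",
          "fileTypesToTransform": ["IMAGE", "TEXT_FILE"]
        }
      }
    ]
  }
}
```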

Comments

K1SMM
Highly Voted 1 year, 2 months ago
B is correct! https://cloud.google.com/dlp/docs/deidentify-storage?hl=pt-br
upvoted 5 times
...
Bettoxicity
Most Recent 6 months, 1 week ago
Selected Answer: B
- Cloud DLP is specifically designed to detect and de-identify sensitive data like PII. You can configure an inspection job to target files older than 12 months and remove PII before archiving. - DLP can anonymize the files and store them in a separate Cloud Storage bucket for archival purposes, ensuring compliance with data retention requirements. - After anonymization, the original files with PII can be deleted securely, minimizing the risk of exposure.
upvoted 1 times
...
cyberpunk21
1 year, 1 month ago
Selected Answer: B
B is accurate
upvoted 2 times
...
anshad666
1 year, 1 month ago
Selected Answer: B
I'll go with B
upvoted 1 times
...
ITIFR78
1 year, 1 month ago
Selected Answer: B
B should be ok
upvoted 2 times
...

Question 194

Question #: 194
Topic #: 1

You plan to synchronize identities to Cloud Identity from a third-party identity provider (IdP). You discovered that some employees used their corporate email address to set up consumer accounts to access Google services. You need to ensure that the organization has control over the configuration, security, and lifecycle of these consumer accounts.

What should you do? (Choose two.)

  • A. Mandate that those corporate employees delete their unmanaged consumer accounts.
  • B. Reconcile accounts that exist in Cloud Identity but not in the third-party IdP.
  • C. Evict the unmanaged consumer accounts in the third-party IdP before you sync identities.
  • D. Use Google Cloud Directory Sync (GCDS) to migrate the unmanaged consumer accounts' emails as user aliases.
  • E. Use the transfer tool to invite those corporate employees to transfer their unmanaged consumer accounts to the corporate domain.
Suggested Answer: B 🗳️

Comments

Mr_MIXER007
7 months, 1 week ago
Selected Answer: E
Two answers should be chosen, so BE.
upvoted 2 times
...
irmingard_examtopics
12 months ago
Selected Answer: E
Two answers should be chosen, so BE.
upvoted 2 times
...
Bettoxicity
1 year ago
BE - "Reconcile Existing Accounts" refers to the process of comparing and aligning accounts between two systems. - "Transfer Tool" is the official method recommended by Google to convert unmanaged consumer accounts into managed accounts within your domain. It allows you to invite employees to migrate their accounts, giving your organization control over configuration, security, and lifecycle.
upvoted 3 times
...
Andrei_Z
1 year, 7 months ago
Selected Answer: B
BE look like the correct answers
upvoted 2 times
...
ArizonaClassics
1 year, 7 months ago
BE satisfies it
upvoted 2 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: B
B & E are correct
upvoted 1 times
...
ITIFR78
1 year, 7 months ago
Selected Answer: B
BE https://cloud.google.com/architecture/identity/reconciling-orphaned-managed-user-accounts
upvoted 2 times
...
Simon6666
1 year, 7 months ago
BE https://cloud.google.com/architecture/identity/reconciling-orphaned-managed-user-accounts
upvoted 3 times
...
akg001
1 year, 8 months ago
Selected Answer: B
B & E, To ensure control over the configuration, security, and lifecycle of consumer accounts created with corporate email addresses, you should reconcile accounts that exist in Cloud Identity but not in the third-party IdP (B). This helps to align accounts and ensure consistent management. Additionally, you can use the transfer tool to invite employees to transfer their unmanaged consumer accounts to the corporate domain (E), which allows you to bring these accounts under the organization's control in Cloud Identity.
upvoted 4 times
...
rmoss25
1 year, 8 months ago
E. https://support.google.com/a/answer/6178640?hl=en
upvoted 1 times
...
Sanjana2020
1 year, 8 months ago
B and E
upvoted 3 times
...
K1SMM
1 year, 8 months ago
E use transfer tool
upvoted 1 times
...

Question 195

Question #: 195
Topic #: 1

You are auditing all your Google Cloud resources in the production project. You want to identify all principals who can change firewall rules.

What should you do?

  • A. Use Policy Analyzer to query the permissions compute.firewalls.get or compute.firewalls.list.
  • B. Use Firewall Insights to understand your firewall rules usage patterns.
  • C. Reference the Security Health Analytics – Firewall Vulnerability Findings in the Security Command Center.
  • D. Use Policy Analyzer to query the permissions compute.firewalls.create or compute.firewalls.update or compute.firewalls.delete.
Suggested Answer: D 🗳️
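The Policy Analyzer query in option D can be run through the Cloud Asset API; for example (the organization ID is a placeholder):

```shell
# List which principals hold firewall-mutation permissions anywhere
# in the organization
gcloud asset analyze-iam-policy \
    --organization="123456789012" \
    --permissions="compute.firewalls.create,compute.firewalls.update,compute.firewalls.delete"
```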

Comments

dija123
6 months, 2 weeks ago
Selected Answer: D
D is correct
upvoted 1 times
...
ArizonaClassics
1 year, 1 month ago
Use Policy Analyzer to query the permissions compute.firewalls.create or compute.firewalls.update or compute.firewalls.delete.
upvoted 1 times
...
cyberpunk21
1 year, 1 month ago
Selected Answer: D
D is the option; it's a direct question
upvoted 2 times
...
anshad666
1 year, 1 month ago
Selected Answer: D
Must be D
upvoted 2 times
...
akg001
1 year, 1 month ago
Selected Answer: D
D- To identify all principals who can change firewall rules, you should use Policy Analyzer to query for the permissions related to creating, updating, or deleting firewall rules. These permissions are usually associated with compute.firewalls.create, compute.firewalls.update, and compute.firewalls.delete. By checking which principals have these permissions, you can determine who has the ability to change firewall rules in your Google Cloud project.
upvoted 2 times
...
alkaloid
1 year, 2 months ago
Selected Answer: D
D. You can use the Policy Analyzer to check which resources within your organization a principal has a certain roles or permissions on. To get this information, create a query that includes the principal whose access you want to analyze and one or more permissions or roles that you want to check for. https://cloud.google.com/policy-intelligence/docs/analyze-iam-policies#:~:text=You%20can%20use%20the%20Policy%20Analyzer%20to%20check%20which%20resources,you%20want%20to%20check%20for.
upvoted 2 times
...
K1SMM
1 year, 2 months ago
D is correct!
upvoted 4 times
...

Question 196

Question #: 196
Topic #: 1

Your organization previously stored files in Cloud Storage by using Google Managed Encryption Keys (GMEK), but has recently updated the internal policy to require Customer Managed Encryption Keys (CMEK). You need to re-encrypt the files quickly and efficiently with minimal cost.

What should you do?

  • A. Reupload the files to the same Cloud Storage bucket specifying a key file by using gsutil.
  • B. Encrypt the files locally, and then use gsutil to upload the files to a new bucket.
  • C. Copy the files to a new bucket with CMEK enabled in a secondary region.
  • D. Change the encryption type on the bucket to CMEK, and rewrite the objects.
Suggested Answer: D 🗳️
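Option D maps to roughly the following gsutil commands (bucket, project, and key names are hypothetical):

```shell
# Set the bucket's default encryption key to a Cloud KMS (CMEK) key
gsutil kms encryption \
    -k projects/my-proj/locations/us/keyRings/my-ring/cryptoKeys/my-key \
    gs://my-bucket

# Rewrite existing objects in place so they are re-encrypted with the
# bucket's new default key; -r recurses through the bucket
gsutil rewrite -k -r gs://my-bucket
```

The rewrite happens server-side, so no data is downloaded or re-uploaded.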

Comments

pradoUA
1 year ago
Selected Answer: D
D is the correct answer
upvoted 1 times
...
ArizonaClassics
1 year, 1 month ago
The most efficient and cost-effective approach to meet your requirements would be: D. Change the encryption type on the bucket to CMEK, and rewrite the objects. Rewriting the objects in-place within the same bucket, specifying the new CMEK for encryption, allows you to re-encrypt the data without downloading and re-uploading it, thus minimizing costs and time.
upvoted 1 times
...
cyberpunk21
1 year, 1 month ago
Selected Answer: D
D is the option; it's a direct question
upvoted 2 times
...
RuchiMishra
1 year, 1 month ago
Selected Answer: D
https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys
upvoted 2 times
...
arpgaur
1 year, 1 month ago
re-writing the objects is not quick and efficient. Option D is incorrect. C. Copy the files to a new bucket with CMEK enabled in a secondary region.Option C is the most efficient and cost-effective solution. It would create a new bucket with CMEK enabled in a secondary region. The files would be copied to the new bucket, and the encryption type would be changed to CMEK. This would allow the files to be accessed using CMEK, while minimizing the impact on performance and availability.
upvoted 3 times
ymkk
1 year, 1 month ago
Copying the files to a new bucket in a secondary region would incur data egress charges and take time.
upvoted 1 times
Crotofroto
9 months, 2 weeks ago
Less time than re-writing and no egress cost as nothing is exiting GCP.
upvoted 1 times
...
...
...
akg001
1 year, 1 month ago
Option D: By changing the encryption type on the bucket to CMEK and rewriting the objects, you can efficiently re-encrypt the existing files in Cloud Storage using Customer Managed Encryption Keys (CMEK). This option avoids the need to reupload or copy the files and allows you to apply the new encryption policy to the existing objects in the bucket.
upvoted 1 times
...
K1SMM
1 year, 2 months ago
I think D https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys?hl=pt-br
upvoted 2 times
...

Question 197

Question #: 197
Topic #: 1

You run applications on Cloud Run. You already enabled container analysis for vulnerability scanning. However, you are concerned about the lack of control on the applications that are deployed. You must ensure that only trusted container images are deployed on Cloud Run.

What should you do? (Choose two.)

  • A. Enable Binary Authorization on the existing Cloud Run service.
  • B. Set the organization policy constraint constraints/run.allowedBinaryAuthorizationPolicies to the list of allowed Binary Authorization policy names.
  • C. Enable Binary Authorization on the existing Kubernetes cluster.
  • D. Use Cloud Run breakglass to deploy an image that meets the Binary Authorization policy by default.
  • E. Set the organization policy constraint constraints/compute.trustedImageProjects to the list of projects that contain the trusted container images.
Suggested Answer: AB 🗳️
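As a sketch of answers A and B (service name, region, and organization ID are placeholders):

```shell
# A: enforce the default Binary Authorization policy on the existing
# Cloud Run service
gcloud run services update my-service \
    --region=us-central1 \
    --binary-authorization=default

# B: restrict which Binary Authorization policies Cloud Run services
# may use, via the org policy constraint
cat > binauthz-policy.yaml <<EOF
name: organizations/123456789012/policies/run.allowedBinaryAuthorizationPolicies
spec:
  rules:
    - values:
        allowedValues:
          - default
EOF
gcloud org-policies set-policy binauthz-policy.yaml
```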

Comments

chimz2002
Highly Voted 1 year ago
Selected Answer: AB
options A and B are right. video explanation, feel free to watch from the beginning - https://youtu.be/b7GdpEEvGDQ?t=249
upvoted 5 times
...
zanhsieh
Most Recent 3 months, 2 weeks ago
Selected Answer: AB
AB. C: No. The question doesn't have "the existing Kubernetes cluster". D: No. Why breakglass if we already took opt A? E: No. "compute.trustedImageProjects" is for Compute Engine. See the link below: https://cloud.google.com/compute/docs/images/restricting-image-access#trusted_images https://cloud.google.com/binary-authorization/docs/run/requiring-binauthz-cloud-run#set_the_organization_policy
upvoted 1 times
...
desertlotus1211
9 months, 1 week ago
https://youtu.be/b7GdpEEvGDQ?t=249 this video explains it at 4:30 in
upvoted 2 times
...
Xoxoo
1 year ago
Selected Answer: AE
To ensure that only trusted container images are deployed on Cloud Run, you should take the following actions: Option A: Enable Binary Authorization on the existing Cloud Run service. Binary Authorization allows you to create policies that specify which container images are allowed to be deployed. By enabling Binary Authorization on your Cloud Run service, you can enforce these policies, ensuring that only trusted container images are deployed. Option E: Set the organization policy constraint constraints/compute.trustedImageProjects to the list of projects that contain the trusted container images. This organization policy constraint allows you to specify which projects are considered trusted sources of container images. By setting this constraint, you can control where trusted container images can be sourced from.
upvoted 1 times
Xoxoo
1 year ago
Options B, C, and D are not directly related to controlling container image deployments on Cloud Run: Option B: This option appears to refer to a policy constraint related to Cloud Run but doesn't specifically address Binary Authorization, which is the tool for enforcing image trust. Option C: Enabling Binary Authorization on a Kubernetes cluster is useful for controlling container image deployments in Kubernetes, but it doesn't directly apply to Cloud Run, which is a different serverless container platform. Option D: The concept of "Cloud Run breakglass" is not a standard term or method for controlling image deployments. Binary Authorization is the recommended approach for enforcing container image trust.
upvoted 1 times
...
Xoxoo
1 year ago
Option E: Set the organization policy constraint constraints/compute.trustedImageProjects to the list of projects that contain the trusted container images. This organization policy constraint allows you to specify which projects are considered trusted sources of container images. By setting this constraint, you can control where trusted container images can be sourced from.
upvoted 1 times
...
...
ArizonaClassics
1 year, 1 month ago
AE Satisfies the concept
upvoted 1 times
...
anshad666
1 year, 1 month ago
Selected Answer: AB
Looks like AB https://cloud.google.com/binary-authorization/docs/run/requiring-binauthz-cloud-run
upvoted 2 times
...
cyberpunk21
1 year, 1 month ago
Selected Answer: AE
A speaks about authorization and E talks about using trusted images so AE are correct
upvoted 1 times
...
Mithung30
1 year, 2 months ago
Selected Answer: AB
Correct answer is AB https://cloud.google.com/binary-authorization/docs/run/requiring-binauthz-cloud-run#set_the_organization_policy
upvoted 2 times
...
hykdlidesd
1 year, 2 months ago
I think AB because E is for Compute Engine
upvoted 1 times
...
pfilourenco
1 year, 2 months ago
Selected Answer: AB
A & B: https://cloud.google.com/binary-authorization/docs/run/requiring-binauthz-cloud-run
upvoted 2 times
...
K1SMM
1 year, 2 months ago
AE https://cloud.google.com/binary-authorization/docs/configuring-policy-console?hl=pt-br#cloud-run
upvoted 2 times
...

Question 198

Question #: 198
Topic #: 1

Your organization has on-premises hosts that need to access Google Cloud APIs. You must enforce private connectivity between these hosts, minimize costs, and optimize for operational efficiency.

What should you do?

  • A. Set up VPC peering between the hosts on-premises and the VPC through the internet.
  • B. Route all on-premises traffic to Google Cloud through an IPsec VPN tunnel to a VPC with Private Google Access enabled.
  • C. Enforce a security policy that mandates all applications to encrypt data with a Cloud Key Management Service (KMS) key before you send it over the network.
  • D. Route all on-premises traffic to Google Cloud through a dedicated or Partner Interconnect to a VPC with Private Google Access enabled.
Suggested Answer: B 🗳️
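For option B (or D), on-premises hosts reach Google APIs privately by resolving them to the Private Google Access VIP, so the traffic stays on the VPN or Interconnect. A Cloud DNS sketch, with hypothetical zone and network names:

```shell
# Private zone that overrides googleapis.com resolution inside the VPC
gcloud dns managed-zones create googleapis \
    --description="Private Google Access" \
    --dns-name=googleapis.com. \
    --networks=my-vpc \
    --visibility=private

# private.googleapis.com resolves to the 199.36.153.8/30 VIP range
gcloud dns record-sets create private.googleapis.com. \
    --zone=googleapis --type=A --ttl=300 \
    --rrdatas=199.36.153.8,199.36.153.9,199.36.153.10,199.36.153.11

# Send every googleapis.com hostname to the private VIP
gcloud dns record-sets create "*.googleapis.com." \
    --zone=googleapis --type=CNAME --ttl=300 \
    --rrdatas=private.googleapis.com.
```

On-premises resolvers must also forward googleapis.com queries to this zone, and routes for the VIP range must be advertised over the tunnel or attachment.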

Comments

KLei
3 months, 3 weeks ago
Selected Answer: B
https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid Private Google Access for on-premises hosts provides a way for on-premises systems to connect to Google APIs and services by routing traffic through a Cloud VPN tunnel or a VLAN attachment for Cloud Interconnect. Private Google Access for on-premises hosts is an alternative to connecting to Google APIs and services over the internet.
upvoted 1 times
...
Pime13
4 months ago
Selected Answer: D
While Option B can be cost-effective and simpler to set up initially, Option D provides a more robust, reliable, and scalable solution for private connectivity to Google Cloud APIs.
upvoted 1 times
...
Bettoxicity
1 year ago
Selected Answer: D
Why not B?: "IPsec VPN with Public Google Access": While an IPsec VPN can provide some level of security, it still relies on the public internet for connectivity, introducing potential security risks and higher costs compared to an Interconnect. Additionally, Public Google Access exposes API endpoints to the internet, which might not be desirable.
upvoted 1 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: B
B is my option, as it costs way less than the other options
upvoted 3 times
...
RuchiMishra
1 year, 8 months ago
Selected Answer: B
VPN tunnel is less costly than interconnect
upvoted 4 times
akg001
1 year, 7 months ago
I think it optimizes operational efficiency too, as Interconnect adds more complexity to network security operations. You are right, B should be the answer.
upvoted 1 times
...
...
akg001
1 year, 8 months ago
Could be D too, as the question asks to optimize for operational efficiency.
upvoted 1 times
akg001
1 year, 7 months ago
Sorry it should B
upvoted 1 times
...
...
K1SMM
1 year, 8 months ago
B less costs
upvoted 1 times
...

Question 199

Question #: 199
Topic #: 1

As part of your organization's zero trust strategy, you use Identity-Aware Proxy (IAP) to protect multiple applications. You need to ingest logs into a Security Information and Event Management (SIEM) system so that you are alerted to possible intrusions.

Which logs should you analyze?

  • A. Data Access audit logs
  • B. Policy Denied audit logs
  • C. Cloud Identity user log events
  • D. Admin Activity audit logs
Suggested Answer: A 🗳️
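Whichever answer you favor, IAP access records land in the Data Access audit log once audit logging is enabled for IAP. A Logs Explorer filter along these lines (illustrative, not a definitive SIEM query) would select them for export:

```
logName:"cloudaudit.googleapis.com%2Fdata_access"
protoPayload.serviceName="iap.googleapis.com"
```

Both authorized and denied IAP requests appear in these entries, which is what lets a SIEM distinguish normal access from possible intrusions.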

Comments

gcp4test
Highly Voted 1 year, 8 months ago
Selected Answer: A
The data_access log name only appears if there was traffic to your resource after you enabled Cloud Audit Logs for IAP. Click to expand the date and time of the access you want to review. Authorized access has a blue i icon. Unauthorized access has an orange !! icon. " https://cloud.google.com/iap/docs/audit-log-howto
upvoted 7 times
...
zanhsieh
Most Recent 3 months, 3 weeks ago
Selected Answer: A
I will choose A. Not B, because we won't get the valuable information: it just reports what was denied. We are looking for what was not denied, so those events can be turned into alerts.
upvoted 1 times
...
Pime13
4 months ago
Selected Answer: B
B. Policy Denied audit logs Policy Denied audit logs are crucial because they record instances where access to resources was denied based on your IAP policies. These logs can help you identify and investigate unauthorized access attempts, which are critical for detecting potential intrusions. While Data Access and Admin Activity audit logs provide valuable information about resource access and administrative actions, Policy Denied logs specifically highlight security-related events that could indicate malicious activity.
upvoted 2 times
...
BPzen
4 months, 1 week ago
Selected Answer: B
Policy Denied Audit Logs: These logs capture access attempts denied by Identity-Aware Proxy (IAP) policies. They indicate potential unauthorized or suspicious activity, such as users attempting to access resources they are not authorized for. These logs are critical for identifying possible intrusions or misconfigurations in your zero-trust strategy.
upvoted 1 times
...
Mr_MIXER007
7 months, 1 week ago
Selected Answer: A
https://cloud.google.com/iap/docs/audit-log-howto#viewing_audit A
upvoted 1 times
...
3d9563b
8 months, 3 weeks ago
Selected Answer: B
To effectively monitor and detect possible intrusions related to IAP-protected applications, focusing on Policy Denied audit logs provides the most relevant insights into access control and denial events. These logs help you track access violations and unauthorized attempts, aligning with your zero trust strategy and enabling timely alerts in your SIEM system.
upvoted 1 times
...
jujanoso
9 months ago
Selected Answer: B
B. Policy Denied audit logs can show when unauthorized users or devices tried to access protected applications and were blocked, which is crucial for identifying and responding to threats. As part of a zero trust strategy, leveraging Identity-Aware Proxy (IAP) involves closely monitoring and analyzing logs to detect potential intrusions and unauthorized activities.
upvoted 1 times
...
glb2
1 year ago
Selected Answer: B
B. Policy Denied audit logs: These logs contain records of access attempts that were denied by IAP policies. Analyzing these logs can help identify unauthorized access attempts and potential intrusion attempts blocked by IAP.
upvoted 2 times
...
desertlotus1211
1 year, 1 month ago
Answer is B
upvoted 2 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: A
A is fire
upvoted 2 times
...
Mithung30
1 year, 8 months ago
Selected Answer: A
https://cloud.google.com/iap/docs/audit-log-howto#viewing_audit
upvoted 2 times
...

Question 200

Question #: 200
Topic #: 1

Your company must follow industry specific regulations. Therefore, you need to enforce customer-managed encryption keys (CMEK) for all new Cloud Storage resources in the organization called org1.

What command should you execute?

  • A. • organization policy: constraints/gcp.restrictStorageNonCmekServices
    • binding at: org1
    • policy type: allow
    • policy value: all supported services
  • B. • organization policy: constraints/gcp.restrictNonCmekServices
    • binding at: org1
    • policy type: deny
    • policy value: storage.googleapis.com
  • C. • organization policy: constraints/gcp.restrictStorageNonCmekServices
    • binding at: org1
    • policy type: deny
    • policy value: storage.googleapis.com
  • D. • organization policy: constraints/gcp.restrictNonCmekServices
    • binding at: org1
    • policy type: allow
    • policy value: storage.googleapis.com
Suggested Answer: B 🗳️
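Option B corresponds to an org policy like the following (the organization ID is a placeholder):

```shell
# Deny creation of non-CMEK-protected resources in Cloud Storage
cat > cmek-policy.yaml <<EOF
name: organizations/123456789012/policies/gcp.restrictNonCmekServices
spec:
  rules:
    - values:
        deniedValues:
          - storage.googleapis.com
EOF
gcloud org-policies set-policy cmek-policy.yaml
```

With this in place, requests to create a Cloud Storage resource fail unless a Cloud KMS key is specified.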

Comments

KLei
3 months, 2 weeks ago
Selected Answer: B
Require CMEK protection: To require CMEK protection for your organization, configure the constraints/gcp.restrictNonCmekServices organization policy. As a list constraint, the accepted values for this constraint are Google Cloud service names (for example, bigquery.googleapis.com). Use this constraint by providing a list of Google Cloud service names and setting the constraint to Deny. This configuration blocks the creation of resources in these services if the resource is not protected by CMEK. In other words, requests to create a resource in the service don't succeed without specifying a Cloud KMS key. https://cloud.google.com/kms/docs/cmek-org-policy#require-cmek I cannot find the so-called "restrictStorageNonCmekServices" constraint in the Google documentation.
upvoted 1 times
...
BPzen
4 months, 1 week ago
Selected Answer: B
Policy Name: constraints/gcp.restrictNonCmekServices: This policy ensures that resources in specified Google Cloud services (e.g., Cloud Storage) cannot be created without enabling CMEK. It also prevents the removal of CMEK from existing resources.
upvoted 1 times
...
rottzy
1 year, 6 months ago
B. Existing non-CMEK Google Cloud resources must be reconfigured or recreated manually to ensure enforcement. constraints/gcp.restrictNonCmekServices
upvoted 1 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: B
B is correct
upvoted 1 times
...
anshad666
1 year, 7 months ago
Selected Answer: B
B is the correct answer https://cloud.google.com/kms/docs/cmek-org-policy#require-cmek
upvoted 2 times
...
Mithung30
1 year, 8 months ago
Selected Answer: B
https://cloud.google.com/kms/docs/cmek-org-policy#require-cmek
upvoted 1 times
...
pfilourenco
1 year, 8 months ago
Selected Answer: B
B is the correct: https://cloud.google.com/kms/docs/cmek-org-policy#example-require-cmek-project
upvoted 1 times
...
Mithung30
1 year, 8 months ago
Selected Answer: B
https://cloud.google.com/kms/docs/cmek-org-policy#require-cmek
upvoted 1 times
...
pfilourenco
1 year, 8 months ago
Selected Answer: D
D is the correct: Use this constraint by configuring a list of resource hierarchy indicators and setting the constraint to Allow. https://cloud.google.com/kms/docs/cmek-org-policy#project-constraint
upvoted 1 times
pfilourenco
1 year, 8 months ago
Sry, B is the correct: https://cloud.google.com/kms/docs/cmek-org-policy#example-require-cmek-project
upvoted 1 times
...
...
a190d62
1 year, 8 months ago
Selected Answer: B
B https://cloud.google.com/kms/docs/cmek-org-policy#require-cmek
upvoted 1 times
...

Question 201

Question #: 201
Topic #: 1

Your company's Google Cloud organization has about 200 projects and 1,500 virtual machines. There is no uniform strategy for logs and events management, which reduces visibility for your security operations team. You need to design a logs management solution that provides visibility and allows the security team to view the environment's configuration.

What should you do?

  • A. 1. Create a dedicated log sink for each project that is in scope.
    2. Use a BigQuery dataset with time partitioning enabled as a destination of the log sinks.
    3. Deploy alerts based on log metrics in every project.
    4. Grant the role "Monitoring Viewer" to the security operations team in each project.
  • B. 1. Create one log sink at the organization level that includes all the child resources.
    2. Use as destination a Pub/Sub topic to ingest the logs into the security information and event management (SIEM) on-premises, and ensure that the right team can access the SIEM.
    3. Grant the Viewer role at organization level to the security operations team.
  • C. 1. Enable network logs and data access logs for all resources in the "Production" folder.
    2. Do not create log sinks to avoid unnecessary costs and latency.
    3. Grant the roles "Logs Viewer" and "Browser" at project level to the security operations team.
  • D. 1. Create one sink for the "Production" folder that includes child resources and one sink for the logs ingested at the organization level that excludes child resources.
    2. As destination, use a log bucket with a minimum retention period of 90 days in a project that can be accessed by the security team.
    3. Grant the security operations team the role of Security Reviewer at organization level.
Suggested Answer: B 🗳️
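The suggested answer can be sketched with gcloud. This is a minimal sketch; the organization ID, project, topic, and sink names below are illustrative, not from the question.

```shell
# 1. Create a Pub/Sub topic in a project the security team controls.
gcloud pubsub topics create org-logs --project=siem-project

# 2. Create one aggregated sink at the organization level that includes
#    all child folders and projects.
gcloud logging sinks create org-logs-to-siem \
    pubsub.googleapis.com/projects/siem-project/topics/org-logs \
    --organization=123456789012 --include-children

# 3. Let the on-premises SIEM pull from a subscription on the topic.
gcloud pubsub subscriptions create siem-pull \
    --topic=org-logs --project=siem-project
```

Note that after step 2 the sink's writer identity (printed by the create command) still needs roles/pubsub.publisher on the topic before logs will flow.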

Comments

zanhsieh
3 months, 2 weeks ago
Selected Answer: B
B. A: No. Granting monitoring.viewer to the security team doesn't help them see the logs, since the logs go to BigQuery. C: No. How would the security team view logs if no log sink is created? This option means no logs stream in. D: No. A "90 days retention period" and "Security Reviewer" are not what the question asks for.
upvoted 1 times
...
BPzen
4 months, 2 weeks ago
Selected Answer: B
D. Revised for a no-folder scenario:
Create a single organization-level log sink: include all child resources (projects) to centralize logging for the entire organization.
Configure log filters: if you want to scope the logs (e.g., for "production" projects only), use labels or other identifiers on projects to filter relevant logs into the sink.
Destination: use a log bucket in a dedicated project accessible to the security team. Ensure the log bucket has a minimum retention period of 90 days (or longer if required).
Grant access: assign the Security Reviewer role to the security operations team at the organization level. This role provides read access to logs across all resources in the organization.
upvoted 1 times
...
b6f53d8
1 year, 2 months ago
Selected Answer: D
B requires an external on-prem SIEM; it is not the recommended solution
upvoted 1 times
...
b6f53d8
1 year, 2 months ago
For sure not A, but I'm not sure about B because it requires an external SIEM; in my opinion D is the best option
upvoted 1 times
...
Andrei_Z
1 year, 7 months ago
Selected Answer: B
It is B because you need a SIEM to actually analyse the configurations of the environments
upvoted 2 times
...
ArizonaClassics
1 year, 7 months ago
B. 1. Create one log sink at the organization level that includes all the child resources. 2. Use as destination a Pub/Sub topic to ingest the logs into the security information and event management (SIEM) on-premises, and ensure that the right team can access the SIEM.
upvoted 2 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: B
B is good
upvoted 1 times
...
pfilourenco
1 year, 8 months ago
Selected Answer: B
B makes sense
upvoted 1 times
...
a190d62
1 year, 8 months ago
Selected Answer: B
B https://github.com/GoogleCloudPlatform/community/blob/master/archived/exporting-security-data-to-your-siem/index.md
upvoted 1 times
...
K1SMM
1 year, 8 months ago
B makes sense because the Viewer role permits viewing the environment's configuration
upvoted 1 times
...

Question 202


Actual exam question from Google's Professional Cloud Security Engineer
Question #: 202
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your Google Cloud organization allows for administrative capabilities to be distributed to each team through provision of a Google Cloud project with Owner role (roles/owner). The organization contains thousands of Google Cloud projects. Security Command Center Premium has surfaced multiple OPEN_MYSQL_PORT findings. You are enforcing the guardrails and need to prevent these types of common misconfigurations.

What should you do?

  • A. Create a hierarchical firewall policy configured at the organization to deny all connections from 0.0.0.0/0.
  • B. Create a hierarchical firewall policy configured at the organization to allow connections only from internal IP ranges.
  • C. Create a Google Cloud Armor security policy to deny traffic from 0.0.0.0/0.
  • D. Create a firewall rule for each virtual private cloud (VPC) to deny traffic from 0.0.0.0/0 with priority 0.
Suggested Answer: B 🗳️
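A hierarchical firewall policy as in the suggested answer could be set up roughly as follows. This is a sketch only; the organization ID, policy name, and internal range are illustrative assumptions.

```shell
# 1. Create a hierarchical firewall policy at the organization level.
gcloud compute firewall-policies create \
    --organization=123456789012 --short-name=guardrails

# 2. Add a rule that permits MySQL (tcp:3306) only from an internal range,
#    so rules opening 3306 to 0.0.0.0/0 in individual projects are overridden.
gcloud compute firewall-policies rules create 1000 \
    --firewall-policy=guardrails --organization=123456789012 \
    --direction=INGRESS --action=allow \
    --src-ip-ranges=10.0.0.0/8 --layer4-configs=tcp:3306

# 3. Associate the policy with the organization so it applies to every project.
gcloud compute firewall-policies associations create \
    --firewall-policy=guardrails --organization=123456789012
```

Because hierarchical policy rules are evaluated before VPC firewall rules, project Owners cannot override this guardrail from within their own projects.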

Comments

K1SMM
Highly Voted 1 year, 8 months ago
B - https://cloud.google.com/security-command-center/docs/how-to-remediate-security-health-analytics-findings?hl=pt-br#open_mysql_port
upvoted 6 times
dija123
1 year ago
Link in English: https://cloud.google.com/security-command-center/docs/how-to-remediate-security-health-analytics-findings#open_mysql_port
upvoted 1 times
...
...
BPzen
Most Recent 4 months, 2 weeks ago
Selected Answer: B
The goal is to enforce guardrails and prevent common misconfigurations, such as exposing MySQL to the public internet, while still allowing legitimate access (e.g., internal or authorized sources). A complete block of all traffic (0.0.0.0/0) at the organizational level may be too restrictive.
Why Option B is a better fit — selective access: this policy allows connections to MySQL services only from internal IP ranges (e.g., trusted on-premises networks or other VPCs within the organization). By restricting access to authorized ranges, you prevent public exposure without fully disabling MySQL functionality.
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: B
To be honest the Q in itself is crap. It's not specific enough, as it does not mention restricting said firewall rules to the SQL port. However, having said this, the other answers are crappier so it must be B.
upvoted 1 times
...
Mr_MIXER007
7 months, 1 week ago
Selected Answer: B
Create a hierarchical firewall policy configured at the organization to allow connections only from internal IP ranges
upvoted 1 times
...
3d9563b
8 months, 3 weeks ago
Selected Answer: A
Creating a hierarchical firewall policy at the organization level to deny all connections from 0.0.0.0/0 is the most efficient, scalable, and manageable solution to enforce guardrails and prevent common misconfigurations like open MySQL ports across a large number of projects.
upvoted 1 times
...
b6f53d8
1 year, 2 months ago
Selected Answer: B
checked with Bard :P
upvoted 2 times
...
Crotofroto
1 year, 3 months ago
Selected Answer: A
The only option that actually blocks access to the MYSQL port is option A. Other rules should be created with higher priority to avoid infrastructure failures. Option B is not correct because it continues to allow unrestricted connections within the VPC, which may pose a risk of lateral movement.
upvoted 4 times
ale_brd_111
1 year, 2 months ago
Open MySQL port (category name in the API: OPEN_MYSQL_PORT). Firewall rules that allow any IP address to connect to MySQL ports might expose your MySQL services to attackers. For more information, see VPC firewall rules overview. The MySQL service port is TCP 3306. This finding is generated for vulnerable firewall rules, even if you intentionally disable the rules. Active findings for disabled firewall rules alert you to unsafe configurations that will allow undesired traffic if enabled. To remediate this finding, complete the following steps:
1. Go to the Firewall page in the Google Cloud console.
2. In the list of firewall rules, click the name of the firewall rule in the finding.
3. Click Edit.
4. Under Source IP ranges, delete 0.0.0.0/0.
5. Add specific IP addresses or IP ranges that you want to let connect to the instance.
6. Add specific protocols and ports you want to open on your instance.
7. Click Save.
upvoted 2 times
...
...
Xoxoo
1 year, 6 months ago
Selected Answer: B
upvoted 2 times
arpgaur
1 year, 6 months ago
We can all use Gen AI to get answers, but sometimes even they give a wrong one, or when prompted to change, they'll just go with whatever you're saying, which is not reliable. Please provide an official link along with the answer to verify. This does not help anyone.
upvoted 2 times
Xoxoo
1 year, 6 months ago
I am pretty sure this is more helpful than just saying "option B is correct" or "option B makes sense". Instead of calling it out, you can be more helpful by providing your own link to justify your answer. AI affirms my answers, so I am posting here to help others.
upvoted 1 times
...
...
Xoxoo
1 year, 6 months ago
Here's why Option B is the recommended choice:
Hierarchical firewall policy: a hierarchical firewall policy set at the organization level allows for centralized control and management of firewall rules across all projects within the organization. This ensures consistent security policies and makes it easier to enforce changes uniformly.
Allow internal IP ranges: by configuring the firewall policy to allow connections only from internal IP ranges, you are implementing a "default deny" rule for external traffic, which is a security best practice. This effectively blocks traffic from 0.0.0.0/0 (anywhere), helping to prevent open ports and unauthorized access.
upvoted 1 times
zanhsieh
3 months, 2 weeks ago
A: No. Denying all incoming traffic from 0.0.0.0/0 is already the firewall's default ingress behavior. C: No. Cloud Armor mostly works with the L7 ALB, which is not what the question asks about; also it doesn't cover all org projects. D: No. This would be cumbersome and inefficient across all projects under the org.
upvoted 1 times
...
Xoxoo
1 year, 6 months ago
Options A, C, and D have some drawbacks:
Option A (deny all connections from 0.0.0.0/0) is a strong security measure but could potentially disrupt legitimate traffic if not configured carefully. It's usually recommended to follow the principle of least privilege and explicitly allow only necessary traffic.
Option C (create a Google Cloud Armor security policy to deny traffic from 0.0.0.0/0) is more suitable for web application security and might not be the most effective way to prevent open ports like OPEN_MYSQL_PORT.
Option D (create a firewall rule for each VPC to deny traffic from 0.0.0.0/0 with priority 0) would require creating and managing individual firewall rules for each VPC, which could be cumbersome and less efficient than using a hierarchical firewall policy at the organization level.
upvoted 1 times
...
f983100
1 year, 4 months ago
That makes sense, but how could I control which internal IP ranges each owner uses in their project?
upvoted 1 times
...
...
...
Andrei_Z
1 year, 7 months ago
Selected Answer: B
This question is quite weird; none of the options will prevent this type of misconfiguration
upvoted 3 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: B
B is good
upvoted 1 times
...
pfilourenco
1 year, 8 months ago
Selected Answer: B
B makes sense
upvoted 2 times
...

Question 203


Actual exam question from Google's Professional Cloud Security Engineer
Question #: 203
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization must comply with the regulation to keep instance logging data within Europe. Your workloads will be hosted in the Netherlands in region europe-west4 in a new project. You must configure Cloud Logging to keep your data in the country.

What should you do?

  • A. Configure the organization policy constraint gcp.resourceLocations to europe-west4.
  • B. Configure log sink to export all logs into a Cloud Storage bucket in europe-west4.
  • C. Create a new log bucket in europe-west4, and redirect the _Default bucket to the new bucket.
  • D. Set the logging storage region to europe-west4 by using the gcloud CLI logging settings update.
Suggested Answer: C 🗳️
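The suggested answer's two steps map onto two gcloud commands. A minimal sketch; the project and bucket names are illustrative assumptions.

```shell
# 1. Create a regional log bucket in europe-west4 (the region is fixed at
#    creation and cannot be changed later).
gcloud logging buckets create eu-logs \
    --location=europe-west4 --project=my-project

# 2. Redirect the project's _Default sink to the new bucket so future logs
#    are stored in europe-west4.
gcloud logging sinks update _Default \
    logging.googleapis.com/projects/my-project/locations/europe-west4/buckets/eu-logs \
    --project=my-project
```

For comparison, option D's org-wide default (`gcloud logging settings update --organization=ORG_ID --storage-location=europe-west4`) only affects the _Default and _Required buckets of projects created afterwards, which is the crux of the C-versus-D debate below.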

Comments

desertlotus1211
Highly Voted 1 year, 7 months ago
Answer is C: https://cloud.google.com/logging/docs/regionalized-logs
This guide walks through the process using the example of redirecting all logs to the europe-west1 region. The process involves the following steps:
1. Create a log bucket in the designated region for storing the logs.
2. Redirect the _Default sink to route the logs to the new log bucket.
3. Search for logs in the Logs Explorer.
4. (Optional) Update the log retention period.
upvoted 7 times
ElviraRrr
1 year, 6 months ago
Note: After you create your log bucket, you can't change your bucket's region. If you need a bucket in a different region, you must create a new bucket in that region, redirect the appropriate sinks to the new bucket, and then delete the old bucket. https://cloud.google.com/logging/docs/buckets#create_bucket
upvoted 3 times
...
desertlotus1211
1 year, 3 months ago
The question ask for a NEW bucket. not change the existing.. C is correct
upvoted 1 times
...
espressoboy
1 year, 6 months ago
Is this not project specific though? The command has us specify the project to apply this redirect to, e.g.:
gcloud logging sinks update _Default logging.googleapis.com/projects/logs-test-project/locations/europe-west1/buckets/region-1-logs-bucket
If this org is deploying workloads across different projects, surely those projects will each have a new _Default log sink? To cover this use case you'd need Option D. From https://cloud.google.com/logging/docs/default-settings#view-org-settings: "if you want to automatically apply a particular storage region to the new _Default and _Required buckets created in your organization, you can configure default resource location". Option D lets us configure org-level regionalisation for all new _Default log sinks.
upvoted 2 times
...
...
JohnDohertyDoe
Most Recent 3 months, 2 weeks ago
Selected Answer: D
https://cloud.google.com/sdk/gcloud/reference/logging/settings/update#--storage-location The question talks about a new project, so this is the better solution. If it was an existing project, then it would make sense to create a new log bucket and redirect (Option C).
upvoted 1 times
...
BPzen
4 months, 1 week ago
Selected Answer: C
Why Option C is correct:
Log buckets and regional compliance: Cloud Logging allows you to create log buckets in specific regions to comply with data residency requirements. By creating a log bucket in europe-west4, you ensure that all logs are stored within the required region.
Redirecting the _Default bucket: the _Default bucket is used by Cloud Logging to store logs by default. Redirecting logs from the _Default bucket to the newly created regional log bucket ensures that all logs are compliant with the regulation.
upvoted 1 times
...
Mr_MIXER007
7 months, 1 week ago
Selected Answer: C
Create a new log bucket in europe-west4, and redirect the _Default bucket to the new bucket
upvoted 1 times
...
Potatoe2023
11 months, 3 weeks ago
Selected Answer: C
Answer is C according to: https://cloud.google.com/storage/docs/moving-buckets
upvoted 2 times
...
Bettoxicity
1 year ago
Selected Answer: D
D: Granular Control: Using gcloud CLI logging settings update specifically targets the logging storage region. This ensures logs are stored in the desired region (europe-west4) without affecting other settings. Why not C: Creating a new bucket in europe-west4 and redirecting the default bucket wouldn't change the storage region of existing logs. It might only affect future logs written to the new bucket.
upvoted 3 times
...
glb2
1 year ago
Selected Answer: C
https://cloud.google.com/logging/docs/default-settings#specify-region
upvoted 3 times
...
rushi000001
1 year, 1 month ago
Answer C: Log buckets are at the project level; gcloud CLI logging settings update works at the org/folder level. If we apply it at the org/folder level, it will update all projects, some of which may be using other regions in Europe
upvoted 4 times
...
b6f53d8
1 year, 2 months ago
Selected Answer: C
you need to create new bucket in specific region
upvoted 4 times
...
chimz2002
1 year, 6 months ago
Selected Answer: D
D. Set the logging storage region to europe-west4 using the gcloud CLI logging settings update. Here's how this option aligns with the requirement: by setting the logging storage region to europe-west4, you ensure that the log data will be stored in the specified region, which complies with the regulation to keep instance logging data within Europe.
upvoted 1 times
...
ElviraRrr
1 year, 6 months ago
Selected Answer: C
Note: After you create your log bucket, you can't change your bucket's region. If you need a bucket in a different region, you must create a new bucket in that region, redirect the appropriate sinks to the new bucket, and then delete the old bucket. https://cloud.google.com/logging/docs/buckets#create_bucket
upvoted 2 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: D
D is correct
upvoted 1 times
...
anshad666
1 year, 7 months ago
Selected Answer: D
https://cloud.google.com/logging/docs/default-settings#config-logging
upvoted 1 times
...
RuchiMishra
1 year, 8 months ago
Selected Answer: C
https://cloud.google.com/logging/docs/default-settings#specify-region
upvoted 1 times
...
Mithung30
1 year, 8 months ago
Selected Answer: D
https://cloud.google.com/logging/docs/default-settings#config-logging
upvoted 1 times
...
pfilourenco
1 year, 8 months ago
Selected Answer: D
D is the correct option: gcloud logging settings update --organization=ORGANIZATION_ID --storage-location=LOCATION from: https://cloud.google.com/logging/docs/default-settings#config-logging
upvoted 2 times
...
a190d62
1 year, 8 months ago
Selected Answer: C
C https://cloud.google.com/logging/docs/regionalized-logs
upvoted 1 times
...

Question 204


Actual exam question from Google's Professional Cloud Security Engineer
Question #: 204
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are using Security Command Center (SCC) to protect your workloads and receive alerts for suspected security breaches at your company. You need to detect cryptocurrency mining software.

Which SCC service should you use?

  • A. Virtual Machine Threat Detection
  • B. Container Threat Detection
  • C. Rapid Vulnerability Detection
  • D. Web Security Scanner
Suggested Answer: A 🗳️

Comments

ArizonaClassics
7 months, 1 week ago
How VM Threat Detection works: VM Threat Detection is a managed service that scans enabled Compute Engine projects and virtual machine (VM) instances to detect potentially malicious applications running in VMs, such as cryptocurrency mining software and kernel-mode rootkits. Option A.
upvoted 1 times
...
Mithung30
8 months, 1 week ago
Selected Answer: A
https://cloud.google.com/security-command-center/docs/concepts-vm-threat-detection-overview#overview
upvoted 3 times
...
pfilourenco
8 months, 1 week ago
Selected Answer: A
A - https://cloud.google.com/security-command-center/docs/how-to-use-vm-threat-detection#overview
upvoted 2 times
...
K1SMM
8 months, 1 week ago
A is correct https://cloud.google.com/security-command-center/docs/concepts-vm-threat-detection-overview?hl=pt-br#how-cryptomining-detection-works
upvoted 1 times
...

Question 205


Actual exam question from Google's Professional Cloud Security Engineer
Question #: 205
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are running applications outside Google Cloud that need access to Google Cloud resources. You are using workload identity federation to grant external identities Identity and Access Management (IAM) roles to eliminate the maintenance and security burden associated with service account keys. You must protect against attempts to spoof another user's identity and gain unauthorized access to Google Cloud resources.

What should you do? (Choose two.)

  • A. Enable data access logs for IAM APIs.
  • B. Limit the number of external identities that can impersonate a service account.
  • C. Use a dedicated project to manage workload identity pools and providers.
  • D. Use immutable attributes in attribute mappings.
  • E. Limit the resources that a service account can access.
Suggested Answer: CD 🗳️
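The two suggested practices can be sketched with gcloud: a dedicated admin project for the pool and provider (C), and an attribute mapping that uses only the stable, immutable `sub` claim (D). The project ID, pool/provider names, and issuer URL below are illustrative assumptions.

```shell
# C: create the pool in a dedicated project that only WIF admins control.
gcloud iam workload-identity-pools create my-pool \
    --project=wif-admin-project --location=global

# D: map google.subject to the immutable `sub` claim, not to a mutable
# attribute (like email or display name) that a user could change to
# spoof another identity.
gcloud iam workload-identity-pools providers create-oidc my-provider \
    --project=wif-admin-project --location=global \
    --workload-identity-pool=my-pool \
    --issuer-uri="https://idp.example.com" \
    --attribute-mapping="google.subject=assertion.sub"
```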

Comments

Xoxoo
Highly Voted 1 year ago
Selected Answer: CD
Best practices for protecting against spoofing threats:
- Use a dedicated project to manage workload identity pools and providers.
- Use organizational policy constraints to disable the creation of workload identity pool providers in other projects.
- Use a single provider per workload identity pool to avoid subject collisions.
- Avoid federating with the same identity provider twice.
- Protect the OIDC metadata endpoint of your identity provider.
- Use the URL of the workload identity pool provider as audience.
- Use immutable attributes in attribute mappings.
- Use non-reusable attributes in attribute mappings.
- Don't allow attribute mappings to be modified.
- Don't rely on attributes that aren't stable or authoritative.
Therefore, options C and D are correct
upvoted 7 times
Nachtwaker
7 months ago
Agree, see https://cloud.google.com/iam/docs/best-practices-for-using-workload-identity-federation#protecting_against_spoofing_threats Because C and D are in the list and E is not, CD is preferred
upvoted 1 times
...
...
desertlotus1211
Most Recent 8 months, 1 week ago
D, E is correct. Immutable attributes in the attribute mappings ensure that the identity information provided by the external identity provider cannot be easily altered. By applying the principle of least privilege, limiting the resources a service account can access ensures that even if an external identity is compromised or misconfigured, the potential impact is minimized.
upvoted 1 times
...
cyberpunk21
1 year, 1 month ago
Selected Answer: CD
CD looks good to me
upvoted 1 times
...
anshad666
1 year, 1 month ago
Selected Answer: CD
https://cloud.google.com/iam/docs/best-practices-for-using-workload-identity-federation#protecting_against_spoofing_threats
upvoted 1 times
...
alkaloid
1 year, 2 months ago
Selected Answer: CD
https://cloud.google.com/iam/docs/best-practices-for-using-workload-identity-federation
upvoted 1 times
...
pfilourenco
1 year, 2 months ago
Selected Answer: CD
C & D - https://cloud.google.com/iam/docs/best-practices-for-using-workload-identity-federation#protecting_against_spoofing_threats
upvoted 2 times
...

Question 206


Actual exam question from Google's Professional Cloud Security Engineer
Question #: 206
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You manage a BigQuery analytical data warehouse in your organization. You want to keep data for all your customers in a common table while you also restrict query access based on rows and columns permissions. Non-query operations should not be supported.

What should you do? (Choose two.)

  • A. Create row-level access policies to restrict the result data when you run queries with the filter expression set to TRUE.
  • B. Configure column-level encryption by using Authenticated Encryption with Associated Data (AEAD) functions with Cloud Key Management Service (KMS) to control access to columns at query runtime.
  • C. Create row-level access policies to restrict the result data when you run queries with the filter expression set to FALSE.
  • D. Configure dynamic data masking rules to control access to columns at query runtime.
  • E. Create column-level policy tags to control access to columns at query runtime.
Suggested Answer: CE 🗳️
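A row-level access policy as in answer C can be created with a DDL statement; the dataset, table, column, and group below are illustrative assumptions.

```shell
# Create a row access policy: the listed group sees only APAC rows.
# Once any row access policy exists on the table (and none uses the
# TRUE filter), non-query operations such as table copies, exports,
# and tabledata.list are blocked for restricted users.
bq query --use_legacy_sql=false "
CREATE ROW ACCESS POLICY apac_only
ON mydataset.customers
GRANT TO ('group:apac-analysts@example.com')
FILTER USING (region = 'APAC')"
```

This is why the filter expression must not be TRUE: a TRUE filter deliberately re-enables full row access and non-query operations, which the question forbids. Column-level control (answer E) additionally requires a Data Catalog taxonomy whose policy tags are attached to the sensitive columns.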

Comments

pfilourenco
Highly Voted 8 months, 1 week ago
Selected Answer: CE
C - Non-query operations should >>>not<<< be supported so it has to be FALSE: https://cloud.google.com/bigquery/docs/using-row-level-security-with-features#the_true_filter E - https://cloud.google.com/bigquery/docs/column-level-security-intro#column-level_security_workflow
upvoted 6 times
...
Xoxoo
Most Recent 6 months, 3 weeks ago
Selected Answer: CE
Bumping this up (credit to pfilourenco): C - Non-query operations should >>>not<<< be supported so it has to be FALSE: https://cloud.google.com/bigquery/docs/using-row-level-security-with-features#the_true_filter E - https://cloud.google.com/bigquery/docs/column-level-security-intro#column-level_security_workflow
upvoted 2 times
...
pradoUA
7 months ago
Selected Answer: CE
CE is ok
upvoted 1 times
...
cyberpunk21
7 months, 3 weeks ago
Selected Answer: CE
CE looks good
upvoted 2 times
...
pfilourenco
8 months, 1 week ago
Selected Answer: AE
A - https://cloud.google.com/bigquery/docs/using-row-level-security-with-features#the_true_filter E - https://cloud.google.com/bigquery/docs/column-level-security-intro#column-level_security_workflow
upvoted 1 times
gcp4test
8 months, 1 week ago
Non-query operations should >>>not<<< be supported, so it has to be FALSE. Correct: CE
upvoted 1 times
pfilourenco
8 months, 1 week ago
Yes, you are correct again :)
upvoted 2 times
...
...
...
K1SMM
8 months, 1 week ago
CD is correct !
upvoted 1 times
...

Question 207


Actual exam question from Google's Professional Cloud Security Engineer
Question #: 207
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your DevOps team uses Packer to build Compute Engine images by using this process:

1. Create an ephemeral Compute Engine VM.
2. Copy a binary from a Cloud Storage bucket to the VM's file system.
3. Update the VM's package manager.
4. Install external packages from the internet onto the VM.

Your security team just enabled the organizational policy, constraints/ compute.vmExternalIpAccess, to restrict the usage of public IP Addresses on VMs. In response, your DevOps team updated their scripts to remove public IP addresses on the Compute Engine VMs; however, the build pipeline is failing due to connectivity issues.

What should you do? (Choose two.)

  • A. Provision an HTTP load balancer with the VM in an unmanaged instance group to allow inbound connections from the internet to your VM.
  • B. Provision a Cloud NAT instance in the same VPC and region as the Compute Engine VM.
  • C. Enable Private Google Access on the subnet that the Compute Engine VM is deployed within.
  • D. Update the VPC routes to allow traffic to and from the internet.
  • E. Provision a Cloud VPN tunnel in the same VPC and region as the Compute Engine VM.
Suggested Answer: BC 🗳️
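The two suggested fixes can be sketched as follows; the network, subnet, router, and region names are illustrative assumptions.

```shell
# C: Private Google Access lets VMs without external IPs reach Google APIs,
# e.g. the Cloud Storage bucket holding the build binary.
gcloud compute networks subnets update build-subnet \
    --region=us-central1 --enable-private-ip-google-access

# B: Cloud NAT provides outbound-only internet access for the package
# manager update and external package installs.
gcloud compute routers create build-router \
    --network=build-vpc --region=us-central1
gcloud compute routers nats create build-nat \
    --router=build-router --region=us-central1 \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
```

Neither change gives the VMs a public IP, so the pipeline stays compliant with constraints/compute.vmExternalIpAccess.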

Comments

Xoxoo
6 months, 3 weeks ago
Selected Answer: BC
Provision a Cloud NAT instance (Option B): Cloud NAT allows your Compute Engine instances without public IP addresses to access the internet while preserving the security restrictions imposed by your organizational policy. By provisioning a Cloud NAT instance in the same VPC and region as your Compute Engine VMs, you enable outbound connectivity for these VMs.
Enable Private Google Access (Option C): enabling Private Google Access on the subnet where your Compute Engine VMs are deployed allows these instances to access Google Cloud services over the private IP address range. This can help with accessing external resources needed during the Packer image build process without exposing the VMs to the public internet.
upvoted 1 times
Xoxoo
6 months, 3 weeks ago
Options A, D, and E are not the most suitable solutions in this context:
A. Provisioning an HTTP load balancer with an unmanaged instance group would allow inbound connections from the internet, which is the opposite of what you want to achieve (restricting public IP addresses).
D. Updating VPC routes to allow traffic to and from the internet would also contradict the goal of restricting public IP addresses.
E. Provisioning a Cloud VPN tunnel is used for connecting on-premises networks to Google Cloud or for secure communication between different VPCs, but is not necessary for addressing the issue of restricted public IP addresses for Packer image builds.
In summary, the most appropriate actions to address the connectivity issue while adhering to the policy constraint are options B and C. These solutions ensure that your Compute Engine VMs can access external resources and Google Cloud services without public IP addresses.
upvoted 1 times
...
...
anshad666
7 months, 2 weeks ago
Selected Answer: BC
B- Cloud Nat for external connections C- Cloud Storage private access from VM
upvoted 3 times
...
cyberpunk21
7 months, 3 weeks ago
Selected Answer: BC
BC looks good
upvoted 2 times
...
pfilourenco
8 months, 1 week ago
Selected Answer: BC
B & C make sense
upvoted 3 times
...
K1SMM
8 months, 1 week ago
BC I think: Cloud NAT for the updates, and Private Google Access for the Cloud Storage access
upvoted 1 times
...

Question 208


Question #: 208
Topic #: 1

Your organization recently activated the Security Command Center (SCC) standard tier. There are a few Cloud Storage buckets that were accidentally made accessible to the public. You need to investigate the impact of the incident and remediate it.

What should you do?

  • A. 1. Remove the Identity and Access Management (IAM) granting access to all Users from the buckets.
    2. Apply the organization policy storage.uniformBucketLevelAccess to prevent regressions.
    3. Query the data access logs to report on unauthorized access.
  • B. 1. Change permissions to limit access for authorized users.
    2. Enforce a VPC Service Controls perimeter around all the production projects to immediately stop any unauthorized access.
    3. Review the administrator activity audit logs to report on any unauthorized access.
  • C. 1. Change the bucket permissions to limit access.
    2. Query the bucket's usage logs to report on unauthorized access to the data.
    3. Enforce the organization policy storage.publicAccessPrevention to avoid regressions.
  • D. 1. Change bucket permissions to limit access.
    2. Query the data access audit logs for any unauthorized access to the buckets.
    3. After the misconfiguration is corrected, mute the finding in the Security Command Center.
Suggested Answer: C 🗳️

Comments

Xoxoo
6 months, 3 weeks ago
Selected Answer: C
Here's why option C is the most appropriate choice:

Change bucket permissions to limit access: The first step is to immediately change the bucket permissions and revoke public access. This is crucial for preventing further unauthorized access to the data stored in the Cloud Storage buckets.

Query the bucket's usage logs: Querying the bucket's usage logs allows you to investigate the impact of the incident by identifying any unauthorized access or suspicious activity. You can use these logs to assess the extent of the breach and gather information about which objects or data were accessed.

Enforce storage.publicAccessPrevention: To prevent similar incidents in the future, enforce the organization policy storage.publicAccessPrevention. This policy ensures that public access is prevented at the organizational level, reducing the risk of accidental misconfigurations.
upvoted 4 times
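The remediation steps above can be sketched with gsutil and gcloud. This is a hedged sketch: the bucket name `leaky-bucket` and organization ID `123456789` are placeholders, not values from the question:

```shell
# Step 1: revoke public access on the affected bucket.
gsutil iam ch -d allUsers gs://leaky-bucket
gsutil iam ch -d allAuthenticatedUsers gs://leaky-bucket

# Step 3: enforce public access prevention org-wide to avoid regressions.
gcloud resource-manager org-policies enable-enforce \
    constraints/storage.publicAccessPrevention \
    --organization=123456789
```

Note that step 2 (querying usage logs) only works if usage logging was already configured for the bucket, since Cloud Storage usage logs are off by default.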
Xoxoo
6 months, 3 weeks ago
Option A is not as comprehensive because it doesn't include enforcing the organization policy that prevents regressions (storage.publicAccessPrevention).

Option B suggests enforcing VPC Service Controls, which is a good practice for network-level security, but it is not directly related to securing Cloud Storage buckets and investigating unauthorized access. Additionally, reviewing administrator activity audit logs is less effective for investigating the impact on data access than querying the bucket's usage logs.

Option D is similar to option C but does not include the proactive enforcement of storage.publicAccessPrevention to prevent future regressions. Enforcing this policy is essential to maintain security.
upvoted 2 times
...
...
anshad666
7 months, 2 weeks ago
Selected Answer: C
C looks good.
upvoted 1 times
...
akg001
8 months ago
Selected Answer: C
C - is correct
upvoted 2 times
...
pfilourenco
8 months, 1 week ago
Selected Answer: C
C - usage logs to track access that occurs because a resource has allUsers or allAuthenticatedUsers - https://cloud.google.com/storage/docs/access-logs#should-you-use and the constraint - https://cloud.google.com/storage/docs/org-policy-constraints#public-access-prevention
upvoted 4 times
...

Question 209

Exam Professional Cloud Security Engineer topic 1 question 209 discussion

Question #: 209
Topic #: 1

Your organization is transitioning to Google Cloud. You want to ensure that only trusted container images are deployed on Google Kubernetes Engine (GKE) clusters in a project. The containers must be deployed from a centrally managed Container Registry and signed by a trusted authority.

What should you do? (Choose two.)

  • A. Enable Container Threat Detection in the Security Command Center (SCC) for the project.
  • B. Configure the trusted image organization policy constraint for the project.
  • C. Create a custom organization policy constraint to enforce Binary Authorization for Google Kubernetes Engine (GKE).
  • D. Enable PodSecurity standards, and set them to Restricted.
  • E. Configure the Binary Authorization policy with respective attestations for the project.
Suggested Answer: CE 🗳️

Comments

p981pa123
2 months, 3 weeks ago
Selected Answer: CE
The option B. Configure the trusted image organization policy constraint for the project is not directly applicable to Google Kubernetes Engine (GKE) in the way that Binary Authorization is. Instead, this option refers to configuring an organization policy that ensures that only trusted images are used across all services, but it doesn't directly enforce a signature or attestation policy for images in GKE clusters. This organization policy is more about restricting sources of images (e.g., only allowing images from specific container registries), but it doesn't directly involve GKE enforcement of trust policies.
upvoted 1 times
...
JohnDohertyDoe
3 months, 2 weeks ago
Selected Answer: CE
It cannot be B, because the trusted image policy does not support container images (it is used for Compute Engine images). Use the Trusted image feature to define an organization policy that allows principals to create persistent disks only from images in specific projects. https://cloud.google.com/compute/docs/images/restricting-image-access
upvoted 2 times
...
pfilourenco
10 months ago
Selected Answer: CE
It's C and E.
A -> Cannot be, because it does not help with centrally managing images or validating signed images.
B -> Cannot be, because that org policy only applies to Compute Engine disk images, not containers (https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints)
C -> Correct, because we can create a custom org policy for GKE to enforce Binary Authorization for image attestation (https://cloud.google.com/kubernetes-engine/docs/how-to/custom-org-policies#enforce)
D -> PodSecurity standards are not applicable to this use case
E -> We need to configure Binary Authorization in order to set up attestations that only allow specific images to be deployed in the cluster (https://cloud.google.com/binary-authorization/docs/setting-up)
So, it's C and E.
upvoted 4 times
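The E part of the answer above can be sketched with gcloud. This is only an illustration: the cluster name, zone, policy file, and attestor path are placeholders, and the exact Binary Authorization flag names have varied across gcloud releases:

```shell
# Turn on Binary Authorization enforcement for an existing GKE cluster.
gcloud container clusters update my-cluster \
    --zone=us-central1-a \
    --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE

# Import a policy that requires attestations before a deploy is admitted.
# Example policy.yaml (illustrative):
#   defaultAdmissionRule:
#     evaluationMode: REQUIRE_ATTESTATION
#     enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
#     requireAttestationsBy:
#       - projects/PROJECT_ID/attestors/my-attestor
gcloud container binauthz policy import policy.yaml
```

The attestor referenced in the policy is what represents the "trusted authority" in the question: only images carrying its signature pass admission.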
...
Bettoxicity
1 year ago
Selected Answer: BE
BE are correct!
upvoted 2 times
...
desertlotus1211
1 year, 3 months ago
What is the 'trusted image organization policy constraint'? Where is it defined and found? Can someone provide it?
upvoted 1 times
oezgan
1 year ago
https://cloud.google.com/compute/docs/images/restricting-image-access "Enact an image access policy by setting a compute.trustedImageProjects constraint on your project, your folder, or your organization."
upvoted 1 times
...
...
Xoxoo
1 year, 6 months ago
Selected Answer: BE
To ensure that only trusted container images are deployed on Google Kubernetes Engine (GKE) clusters in a project, and that the containers are deployed from a centrally managed Container Registry and signed by a trusted authority, consider the following options:

Configure the trusted image organization policy constraint for the project (Option B): This allows you to create an organization policy constraint that enforces the use of only trusted images from a specific Container Registry. You can specify the registry that must be used, ensuring that images are sourced only from that trusted location.

Configure the Binary Authorization policy with respective attestations for the project (Option E): Binary Authorization for GKE allows you to create policies that enforce the use of only trusted container images. You can specify which images are trusted and require attestation from trusted authorities before deployment. This ensures that only signed and trusted images can be deployed on the GKE clusters in the project.
upvoted 4 times
Xoxoo
1 year, 6 months ago
Options A, C, and D are not directly related to ensuring the use of trusted container images from a centrally managed Container Registry signed by a trusted authority:

A. Enabling Container Threat Detection in Security Command Center (SCC) helps with threat detection but does not directly enforce the use of trusted container images.

C. Creating a custom organization policy constraint for Binary Authorization is redundant and unnecessary when Binary Authorization can be configured directly (Option E).

D. Enabling PodSecurity standards at the Restricted level enforces certain security policies on pods but does not directly address the issue of ensuring trusted container images.
upvoted 2 times
...
...
pradoUA
1 year, 6 months ago
Selected Answer: BE
BE are correct
upvoted 1 times
...
ArizonaClassics
1 year, 7 months ago
To ensure that only trusted container images are deployed on Google Kubernetes Engine (GKE) clusters in a project, and that these containers are deployed from a centrally managed Container Registry and signed by a trusted authority, consider the following two actions:

B. Configure the trusted image organization policy constraint for the project. Trusted image sources can be specified at the project level using organization policy constraints. This ensures that only images from trusted Container Registries can be deployed.

E. Configure the Binary Authorization policy with respective attestations for the project. Binary Authorization allows you to specify a policy that requires images to be signed by trusted authorities before they can be deployed. You can configure this with attestations indicating that certain steps, like vulnerability scanning and code reviews, have been completed.
upvoted 1 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: BE
B. This policy ensures that only trusted images from specific Container Registry repositories can be deployed. This meets one of the requirements E. Binary Authorization ensures that only container images that are signed by trusted authorities can be deployed on GKE. Attestations are a component of this, as they provide a verifiable signature by trusted parties that an image meets certain criteria.
upvoted 2 times
...
arpgaur
1 year, 7 months ago
B and E: this will create a policy that enforces Binary Authorization and specifies that only images from the centrally managed Container Registry can be deployed.

C and E: this will create a policy that enforces Binary Authorization and specifies that only images signed by a trusted authority can be deployed. However, it does not specify the source of the images.
upvoted 1 times
...
STomar
1 year, 8 months ago
Correct Answer: BE B: Configure the trusted image organization policy constraint for the project. E: Configure the Binary Authorization policy with respective attestations for the project.
upvoted 1 times
...
akg001
1 year, 8 months ago
Selected Answer: CE
C and E
upvoted 2 times
...
Mithung30
1 year, 8 months ago
Selected Answer: CE
CE is correct
upvoted 2 times
...
K1SMM
1 year, 8 months ago
BC is correct answer
upvoted 2 times
gcp4test
1 year, 8 months ago
B is for Compute Engine images, so I think it is CE. C: custom constraints for Binary Authorization on GKE, OK. E: in the Binary Authorization rule we specify the Container Registry from which images can be deployed.
upvoted 3 times
cyberpunk21
1 year, 7 months ago
it's an org policy constraint; it applies to all kinds of images
upvoted 1 times
...
...
...

Question 210

Exam Professional Cloud Security Engineer topic 1 question 210 discussion

Question #: 210
Topic #: 1

Your company uses Google Cloud and has publicly exposed network assets. You want to discover the assets and perform a security audit on these assets by using a software tool in the least amount of time.

What should you do?

  • A. Run a platform security scanner on all instances in the organization.
  • B. Identify all external assets by using Cloud Asset Inventory, and then run a network security scanner against them.
  • C. Contact a Google approved security vendor to perform the audit.
  • D. Notify Google about the pending audit, and wait for confirmation before performing the scan.
Suggested Answer: B 🗳️

Comments

Bettoxicity
6 months, 1 week ago
Selected Answer: B
B is correct!
upvoted 1 times
...
Xoxoo
1 year ago
Selected Answer: B
The most efficient approach to discover publicly exposed network assets and perform a security audit on them in the least amount of time is:

B. Identify all external assets by using Cloud Asset Inventory, and then run a network security scanner against them.

Here's why option B is the recommended choice:

Cloud Asset Inventory: Using Cloud Asset Inventory allows you to quickly identify all the external assets and resources in your Google Cloud environment, including projects, instances, storage buckets, and more. This step is crucial for understanding the scope of your audit.

Network security scanner: Once you have identified the external assets, you can run a network security scanner to assess their security. Network security scanners can identify vulnerabilities and potential security risks quickly.
upvoted 1 times
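The discovery step described above can be sketched with the Cloud Asset Inventory CLI. A minimal sketch, assuming a placeholder organization ID `123456789`; the asset types listed are just two examples of externally reachable resources:

```shell
# Search the whole organization for assets that typically carry
# external exposure: reserved IP addresses and forwarding rules.
gcloud asset search-all-resources \
    --scope=organizations/123456789 \
    --asset-types='compute.googleapis.com/Address,compute.googleapis.com/ForwardingRule'
```

The external IPs returned can then be fed to whatever network security scanner your team uses for the audit itself.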
Xoxoo
1 year ago
Option A (running a platform security scanner on all instances) might be time-consuming, especially if you have a large number of instances, and it doesn't address publicly exposed assets other than instances.

Option C (contacting a Google-approved security vendor) is a valid option, but it may introduce delays as you wait for the vendor's availability, and it is likely to involve additional costs.

Option D (notifying Google about the pending audit) is not a typical step for auditing your own network assets. It is more applicable when engaging Google for a security review or penetration testing, not for a self-initiated audit.
upvoted 1 times
...
...
cyberpunk21
1 year, 1 month ago
Selected Answer: B
B. Identify all external assets by using Cloud Asset Inventory, and then run a network security scanner against them. Cloud Asset Inventory allows you to see all of your Google Cloud assets, so you can quickly identify which ones are externally accessible. Once identified, you can run a specialized network security scanner against only these assets, making the process efficient.

C. Contact a Google approved security vendor to perform the audit. While using an external vendor can be beneficial for thoroughness, it may not meet the criterion of accomplishing the task in the least amount of time.
upvoted 2 times
...
anshad666
1 year, 1 month ago
Selected Answer: B
Should be B
upvoted 1 times
...
pfilourenco
1 year, 2 months ago
Selected Answer: B
B is the correct.
upvoted 3 times
...

Question 211

Exam Professional Cloud Security Engineer topic 1 question 211 discussion

Question #: 211
Topic #: 1

Your organization wants to be compliant with the General Data Protection Regulation (GDPR) on Google Cloud. You must implement data residency and operational sovereignty in the EU.

What should you do? (Choose two.)

  • A. Limit the physical location of a new resource with the Organization Policy Service "resource locations constraint."
  • B. Use Cloud IDS to get east-west and north-south traffic visibility in the EU to monitor intra-VPC and inter-VPC communication.
  • C. Limit Google personnel access based on predefined attributes such as their citizenship or geographic location by using Key Access Justifications.
  • D. Use identity federation to limit access to Google Cloud resources from non-EU entities.
  • E. Use VPC Flow Logs to monitor intra-VPC and inter-VPC traffic in the EU.
Suggested Answer: AC 🗳️

Comments

Andrei_Z
Highly Voted 1 year, 1 month ago
Selected Answer: AC
Just implemented this last month at work
upvoted 9 times
...
Potatoe2023
Most Recent 5 months, 2 weeks ago
Selected Answer: AC
A & C https://cloud.google.com/assured-workloads/key-access-justifications/docs/assured-workloads
upvoted 1 times
...
Bettoxicity
6 months, 1 week ago
Selected Answer: AD
D: Identity federation allows you to integrate your existing identity provider (IdP) with Google Cloud. This enables users to access Google Cloud resources using their existing credentials from the IdP, ideally located within the EU. By configuring access controls within your IdP, you can restrict access to Google Cloud resources from non-EU entities.

Why not C? It doesn't address data location, doesn't restrict access from non-EU entities, isn't a data residency measure, and isn't an operational sovereignty measure.
upvoted 2 times
...
ArizonaClassics
1 year, 1 month ago
To be compliant with GDPR on Google Cloud and implement data residency and operational sovereignty in the EU, you can take the following two actions:

A. Limit the physical location of a new resource with the Organization Policy Service "resource locations constraint." This restricts the locations where resources in your Google Cloud organization can be deployed. You can configure it to allow only EU locations, ensuring that data remains within the EU.

C. Limit Google personnel access based on predefined attributes such as their citizenship or geographic location by using Key Access Justifications. This helps enforce operational sovereignty by controlling who has access to your data, ensuring that only personnel meeting the attribute requirements can access it.
upvoted 1 times
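Part A of the answer above can be sketched as an org policy change. This is a hedged sketch: the organization ID `123456789` is a placeholder, while `in:eu-locations` is the predefined value group for EU regions in the resource locations constraint:

```shell
# Allow new resources to be created only in EU locations,
# enforced across the whole organization.
gcloud resource-manager org-policies allow \
    constraints/gcp.resourceLocations \
    in:eu-locations \
    --organization=123456789
```

Using the `in:eu-locations` value group rather than listing individual regions means the policy keeps up as Google adds new EU regions.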
...
ArizonaClassics
1 year, 1 month ago
So o, for GDPR compliance focusing on data residency and operational sovereignty in the EU, options A and C are the most relevant.
upvoted 1 times
...
GCBC
1 year, 1 month ago
The correct answers are A and C.

A. Limit the physical location of a new resource with the Organization Policy Service "resource locations constraint." This will ensure that all new resources are created in the EU, which is required for data residency compliance with GDPR.

C. Limit Google personnel access based on predefined attributes such as their citizenship or geographic location by using Key Access Justifications. This will help ensure that only Google personnel who are authorized to access EU data are able to do so.
upvoted 1 times
...
cyberpunk21
1 year, 1 month ago
Selected Answer: AC
D is also correct if we're talking in a much bigger scope like using External IDP
upvoted 1 times
...
ITIFR78
1 year, 1 month ago
Selected Answer: AC
A & C - https://cloud.google.com/architecture/framework/security/data-residency-sovereignty#manage_your_operational_sovereignty
upvoted 1 times
...
pfilourenco
1 year, 2 months ago
Selected Answer: AC
A & C - https://cloud.google.com/architecture/framework/security/data-residency-sovereignty#manage_your_operational_sovereignty
upvoted 4 times
arpgaur
1 year, 1 month ago
C is incorrect. Key Access Justifications can be used to limit access to specific keys, but they do not prevent Google personnel from accessing other data in your Google Cloud environment. A and D are the right answers, imo
upvoted 1 times
...
...

Question 212

Exam Professional Cloud Security Engineer topic 1 question 212 discussion

Question #: 212
Topic #: 1

Your company is moving to Google Cloud. You plan to sync your users first by using Google Cloud Directory Sync (GCDS). Some employees have already created Google Cloud accounts by using their company email addresses that were created outside of GCDS. You must create your users on Cloud Identity.

What should you do?

  • A. Configure GCDS and use GCDS search rules to sync these users.
  • B. Use the transfer tool to migrate unmanaged users.
  • C. Write a custom script to identify existing Google Cloud users and call the Admin SDK: Directory API to transfer their account.
  • D. Configure GCDS and use GCDS exclusion rules to ensure users are not suspended.
Suggested Answer: B 🗳️

Comments

anshad666
7 months, 3 weeks ago
Selected Answer: B
B only
upvoted 2 times
...
cyberpunk21
7 months, 3 weeks ago
Selected Answer: C
Using the Directory API, you can programmatically manage user accounts, which includes creating new ones. This would let you create users in Cloud Identity and handle ones that already have accounts.
upvoted 1 times
cyberpunk21
7 months, 3 weeks ago
If you already have an account created using your company's email (an unmanaged account), and your company now wants to establish a managed domain and create accounts for its employees, including you, then option B is the answer.
upvoted 1 times
...
cyberpunk21
7 months, 3 weeks ago
Because the question says you must create your users on Cloud Identity.
upvoted 1 times
...
...
ITIFR78
7 months, 3 weeks ago
Selected Answer: B
Standard answer.
upvoted 3 times
...
Simon6666
7 months, 4 weeks ago
B https://support.google.com/a/answer/7177267?sjid=1548376628970849998-AP
upvoted 2 times
...
pfilourenco
8 months, 1 week ago
Selected Answer: B
B is the correct - https://support.google.com/a/answer/6178640?hl=en&ref_topic=7042002&sjid=4882239396686183653-EU
upvoted 2 times
...
K1SMM
8 months, 1 week ago
B of course
upvoted 1 times
...

Question 213

Exam Professional Cloud Security Engineer topic 1 question 213 discussion

Question #: 213
Topic #: 1

Your organization is using GitHub Actions as a continuous integration and delivery (CI/CD) platform. You must enable access to Google Cloud resources from the CI/CD pipelines in the most secure way.

What should you do?

  • A. Create a service account key, and add it to the GitHub pipeline configuration file.
  • B. Create a service account key, and add it to the GitHub repository content.
  • C. Configure a Google Kubernetes Engine cluster that uses Workload Identity to supply credentials to GitHub.
  • D. Configure workload identity federation to use GitHub as an identity pool provider.
Suggested Answer: D 🗳️

Comments

Pime13
4 months ago
Selected Answer: D
https://cloud.google.com/blog/products/identity-security/enabling-keyless-authentication-from-github-actions
upvoted 1 times
...
ArizonaClassics
7 months, 1 week ago
The most secure way to enable access to Google Cloud resources from CI/CD pipelines using GitHub Actions is:

D. Configure workload identity federation to use GitHub as an identity pool provider.

Workload identity federation allows you to configure Google Cloud to trust external identity providers. In this case, GitHub Actions can be set up as an identity pool provider, so you can federate identities between GitHub and Google Cloud. This eliminates the need to create and manage service account keys, which is generally considered less secure and requires more operational overhead, such as key rotation. With workload identity federation, the process is more secure and streamlined.
upvoted 1 times
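The setup described above can be sketched in two gcloud commands. A hedged sketch: the pool and provider names, and the `my-org/my-repo` repository in the attribute condition, are placeholders you would replace with your own:

```shell
# Create a workload identity pool for GitHub Actions.
gcloud iam workload-identity-pools create github-pool \
    --location=global --display-name="GitHub Actions"

# Register GitHub's OIDC issuer as a provider in that pool,
# mapping the GitHub token claims to Google attributes and
# restricting which repository may federate.
gcloud iam workload-identity-pools providers create-oidc github-provider \
    --location=global \
    --workload-identity-pool=github-pool \
    --issuer-uri="https://token.actions.githubusercontent.com" \
    --attribute-mapping="google.subject=assertion.sub,attribute.repository=assertion.repository" \
    --attribute-condition="assertion.repository=='my-org/my-repo'"
```

The pipeline's federated identity is then granted roles/iam.workloadIdentityUser on a service account, so no long-lived key ever leaves Google Cloud.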
...
cyberpunk21
7 months, 3 weeks ago
Selected Answer: D
D is correct
upvoted 2 times
...
Mithung30
8 months, 1 week ago
Selected Answer: D
D is correct. https://cloud.google.com/blog/products/identity-security/enabling-keyless-authentication-from-github-actions
upvoted 2 times
...
pfilourenco
8 months, 1 week ago
Selected Answer: D
D is the correct.
upvoted 3 times
...

Question 214

Exam Professional Cloud Security Engineer topic 1 question 214 discussion

Question #: 214
Topic #: 1

Your organization processes sensitive health information. You want to ensure that data is encrypted while in use by the virtual machines (VMs). You must create a policy that is enforced across the entire organization.

What should you do?

  • A. Implement an organization policy that ensures that all VM resources created across your organization use customer-managed encryption keys (CMEK) protection.
  • B. Implement an organization policy that ensures all VM resources created across your organization are Confidential VM instances.
  • C. Implement an organization policy that ensures that all VM resources created across your organization use Cloud External Key Manager (EKM) protection.
  • D. No action is necessary because Google encrypts data while it is in use by default.
Suggested Answer: B 🗳️

Comments

ArizonaClassics
7 months, 1 week ago
If your organization processes sensitive health information and you want to ensure that data is encrypted while in use by the virtual machines (VMs), the appropriate action is:

B. Implement an organization policy that ensures all VM resources created across your organization are Confidential VM instances.

Confidential VMs offer memory encryption to secure data while it is in use. They use AMD's Secure Encrypted Virtualization (SEV) feature to keep data encrypted while it is being processed, which meets the requirement of encrypting sensitive health information while in use by the VMs.
upvoted 3 times
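The enforcement described above can be sketched with the Confidential Computing org policy constraint. This is a hedged sketch: the organization ID, instance name, and zone are placeholders, and denying the `compute.googleapis.com` value under `compute.restrictNonConfidentialComputing` is the mechanism for requiring Confidential VMs on Compute Engine:

```shell
# Deny non-confidential computing for Compute Engine org-wide,
# so only Confidential VM instances can be created.
gcloud resource-manager org-policies deny \
    constraints/compute.restrictNonConfidentialComputing \
    compute.googleapis.com \
    --organization=123456789

# Individual VMs are then created as Confidential VMs, e.g.:
# (Confidential VMs need an N2D machine type and TERMINATE on maintenance.)
gcloud compute instances create cvm-1 \
    --zone=us-central1-a \
    --machine-type=n2d-standard-2 \
    --confidential-compute \
    --maintenance-policy=TERMINATE
```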
...
akg001
8 months ago
Selected Answer: B
B- is correct
upvoted 2 times
...
alkaloid
8 months, 1 week ago
Selected Answer: B
B is correct: https://www.youtube.com/watch?v=cAEGCE1vNh4&t=22s
upvoted 4 times
...
pfilourenco
8 months, 1 week ago
Selected Answer: B
B - Confidential VM is a type of Compute Engine VM that ensures your data and applications stay private and encrypted even while in use. By enforcing the Confidential Computing organization policy constraint, you can ensure that all VM resources created across your organization are Confidential VM instances.
upvoted 1 times
...

Question 215

Exam Professional Cloud Security Engineer topic 1 question 215 discussion

Question #: 215
Topic #: 1

You are a Cloud Identity administrator for your organization. In your Google Cloud environment, groups are used to manage user permissions. Each application team has a dedicated group. Your team is responsible for creating these groups and the application teams can manage the team members on their own through the Google Cloud console. You must ensure that the application teams can only add users from within your organization to their groups.

What should you do?

  • A. Change the configuration of the relevant groups in the Google Workspace Admin console to prevent external users from being added to the group.
  • B. Set an Identity and Access Management (IAM) policy that includes a condition that restricts group membership to user principals that belong to your organization.
  • C. Define an Identity and Access Management (IAM) deny policy that denies the assignment of principals that are outside your organization to the groups in scope.
  • D. Export the Cloud Identity logs to BigQuery. Configure an alert for external members added to groups. Have the alert trigger a Cloud Function instance that removes the external members from the group.
Suggested Answer: A 🗳️

Comments

Portugapt
Highly Voted 1 year, 2 months ago
Selected Answer: A
1) https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains#google_groups
2) https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains#forcing_access
Alternatively, you can grant access to a Google group that contains the relevant service accounts: create a Google group within the allowed domain, use the Google Workspace administrator panel to turn off domain restriction for that group, add the service account to the group, and grant access to the Google group in the IAM policy.
3) https://support.google.com/a/answer/167097

You can granularly enforce this requirement on a single group; there is no need to do it company wide. This is also done in the Google Workspace Admin console. My bet is on A.
upvoted 6 times
Portugapt
1 year, 2 months ago
Organization wide*
upvoted 1 times
...
...
MoAk
Most Recent 4 months, 3 weeks ago
Selected Answer: A
The question asks you, as the Cloud Identity admin, to ensure that the application team admins cannot add members from outside your organization. Fine-grained control over individual members can be handled later via IAM, but preventing the group admins from adding external members in the first place can only be achieved with answer A. https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains#google_groups
upvoted 1 times
...
Sundar_Pichai
7 months, 2 weeks ago
Selected Answer: A
I'll go with A. Google IAM conditions allow you to set fine-grained access controls on resources. However, these conditions focus on: resource type, request time, the identity making the request, the source IP address, and device or network conditions. In other words, it is not possible to directly write a Google IAM policy that restricts group membership to within the company domain. Google IAM policies are used to manage access to resources, but they do not control the membership of Google Groups.
upvoted 1 times
...
3d9563b
8 months, 3 weeks ago
Selected Answer: A
By configuring the relevant groups in the Google Workspace Admin console to restrict membership to internal users, you implement a direct and preventive measure that aligns well with the requirement to manage permissions through groups securely.
upvoted 1 times
...
winston9
1 year, 2 months ago
Selected Answer: B
B is correct here
upvoted 3 times
...
Xoxoo
1 year, 6 months ago
Selected Answer: B
To ensure that application teams can only add users from within your organization to their groups, you should use option B: B. Set an Identity and Access Management (IAM) policy that includes a condition that restricts group membership to user principals that belong to your organization. Here's why option B is the recommended choice: 1) IAM Policy with Conditions: You can define an IAM policy for the groups that includes a condition specifying that only user principals belonging to your organization can be added as members. This condition enforces the requirement that only users within your organization can be added to the groups.
upvoted 2 times
Xoxoo
1 year, 6 months ago
Option A, which suggests changing the configuration in the Google Workspace Admin console, typically doesn't provide fine-grained control over group membership based on organization membership. Option C is also not recommended because it defines an IAM deny policy that denies the assignment of principals outside your organization to the groups in scope. This approach can be complex and difficult to manage, especially if you have a large number of groups Option D, "Export the Cloud Identity logs to BigQuery," and configuring an alert and Cloud Function to remove external members, is a more reactive approach and may not prevent external members from being added in the first place.
upvoted 1 times
...
...
desertlotus1211
1 year, 7 months ago
The question is not asking about a Workspace item. Application teams need to add members to a group within the organization, not external ones. So how does this relate to Workspace?
upvoted 2 times
...
ananta93
1 year, 7 months ago
Selected Answer: A
Answer is A. Change the configuration of the relevant groups in the Google Workspace Admin console to prevent external users from being added to the group.
upvoted 2 times
...
ArizonaClassics
1 year, 7 months ago
The goal is to ensure that only users from within your organization can be added to specific Google Cloud groups managed by application teams. Here are some considerations for each option: A. Change the configuration of the relevant groups in the Google Workspace Admin console to prevent external users from being added to the group. If you are using Google Workspace (or Google Workspace for Education), you have the option to prevent external members from being added to a group directly through the Admin console. This is a straightforward way to enforce the policy and doesn't require extra monitoring or automation.
upvoted 1 times
...
ArizonaClassics
1 year, 7 months ago
The most direct and effective way to ensure that only users from within your organization can be added to the Google Cloud groups is: A. Change the configuration of the relevant groups in the Google Workspace Admin console to prevent external users from being added to the group. In Google Workspace Admin Console, you have the option to configure groups such that only users from within your organization can be added. This doesn't require you to rely on reactive measures like monitoring and alerts or to rely on IAM policies, which could be more complex to manage for this specific requirement. You can directly specify who can be a member of these groups by altering their settings in the Admin Console
upvoted 1 times
...
GCBC
1 year, 7 months ago
The correct answer is B. Set an Identity and Access Management (IAM) policy that includes a condition that restricts group membership to user principals that belong to your organization. An IAM policy is a set of permissions that you can attach to a Google Cloud resource, such as a group. The policy defines who can access the resource and what actions they can perform. In this case, you can create an IAM policy that restricts group membership to user principals that belong to your organization. This will prevent the application teams from adding users from outside your organization to their groups. This condition will restrict the policy to users who belong to your organization's domain. Once you have created the policy, you can attach it to the groups that you want to protect. To do this, go to the Groups page in the Google Cloud console and select the groups that you want to protect. Then, click Edit and select the policy that you created.
upvoted 3 times
...
anshad666
1 year, 7 months ago
Selected Answer: A
https://support.google.com/a/answer/167097?hl=en&sjid=9952232817978914605-AP
upvoted 3 times
...
Kush92me
1 year, 7 months ago
A is correct; anyone who has access to the Google Admin portal can check.
upvoted 2 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: A
A is correct
upvoted 1 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: C
C is correct
upvoted 1 times
...
anshad666
1 year, 7 months ago
Selected Answer: C
https://support.google.com/a/answer/167097?hl=en&sjid=9952232817978914605-AP
upvoted 2 times
anshad666
1 year, 7 months ago
There is a typo; it should be A
upvoted 1 times
...
...
gcp4test
1 year, 8 months ago
Selected Answer: A
A - group can be configured to prevent adding external members.
upvoted 2 times
...
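For readers weighing the org-policy alternative raised in the restricting-domains links above, here is a minimal sketch of the `constraints/iam.allowedPolicyMemberDomains` payload. The organization ID and Workspace customer ID are placeholders, and note this constrains IAM policy bindings, not Workspace group membership itself.

```python
import json

# Placeholders: swap in your real organization ID and Workspace customer ID
# (visible via `gcloud organizations list`). "C0xxxxxxx" is NOT a real ID.
ORG_ID = "123456789"
CUSTOMER_ID = "C0xxxxxxx"

def domain_restriction_policy(org_id: str, customer_id: str) -> dict:
    """Builds the payload for constraints/iam.allowedPolicyMemberDomains,
    the org policy that limits IAM principals to the listed customers."""
    return {
        "name": f"organizations/{org_id}/policies/iam.allowedPolicyMemberDomains",
        "spec": {
            "rules": [
                {"values": {"allowedValues": [customer_id]}},
            ],
        },
    }

policy = domain_restriction_policy(ORG_ID, CUSTOMER_ID)
print(json.dumps(policy, indent=2))
```

A payload in this shape is what you would hand to `gcloud org-policies set-policy`; the group-settings route in answer A is configured in the Workspace Admin console instead.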

Question 216

Exam Professional Cloud Security Engineer topic 1 question 216 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 216
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization wants to be continuously evaluated against CIS Google Cloud Computing Foundations Benchmark v1.3.0 (CIS Google Cloud Foundation 1.3). Some of the controls are irrelevant to your organization and must be disregarded in evaluation. You need to create an automated system or process to ensure that only the relevant controls are evaluated.

What should you do?

  • A. Mark all security findings that are irrelevant with a tag and a value that indicates a security exception. Select all marked findings, and mute them on the console every time they appear. Activate Security Command Center (SCC) Premium.
  • B. Activate Security Command Center (SCC) Premium. Create a rule to mute the security findings in SCC so they are not evaluated.
  • C. Download all findings from Security Command Center (SCC) to a CSV file. Mark the findings that are part of CIS Google Cloud Foundation 1.3 in the file. Ignore the entries that are irrelevant and out of scope for the company.
  • D. Ask an external audit company to provide independent reports including needed CIS benchmarks. In the scope of the audit, clarify that some of the controls are not needed and must be disregarded.
Suggested Answer: B 🗳️

Comments

Xoxoo
6 months, 3 weeks ago
Selected Answer: B
Option A is a reasonable approach, but it involves ongoing manual intervention to mute security findings and may not be the most efficient method, especially when dealing with a large number of findings. Option B, activating Security Command Center (SCC) Premium and creating rules to mute security findings, is a more automated and scalable approach. SCC Premium allows you to create custom security rules to automatically filter or mute findings based on your organization's requirements. This can help reduce the noise and ensure that irrelevant findings are not evaluated.
upvoted 2 times
Xoxoo
6 months, 3 weeks ago
Answer: B
upvoted 2 times
...
...
ArizonaClassics
7 months, 1 week ago
The right answer is B. Please disregard the former.
upvoted 1 times
...
ArizonaClassics
7 months, 1 week ago
A. Mark all security findings that are irrelevant with a tag and a value that indicates a security exception. Select all marked findings, and mute them on the console every time they appear. Activate Security Command Center (SCC) Premium. This option might require manual intervention to tag and mute findings every time they appear. This can be labor-intensive and prone to error, thus not ideal for an automated, ongoing evaluation.
upvoted 1 times
...
cyberpunk21
7 months, 3 weeks ago
Selected Answer: B
Using rules, we can automate this.
upvoted 2 times
...
anshad666
7 months, 3 weeks ago
Selected Answer: B
https://cloud.google.com/security-command-center/docs/how-to-mute-findings
upvoted 2 times
...
pfilourenco
8 months, 1 week ago
Selected Answer: B
B - Create a rule to mute!
upvoted 2 times
gcp4test
8 months, 1 week ago
yes rules
upvoted 1 times
...
...
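To make the mute-rule approach in answer B concrete, here is a sketch of an SCC mute config body, assuming SCC Premium is active. The organization ID, rule name, and finding categories are illustrative placeholders, not values taken from the question.

```python
# Sketch of an SCC mute rule ("mute config") body, assuming SCC Premium.
# The org ID, rule name, and finding categories below are illustrative;
# real filters use the SCC finding-filter syntax from the docs linked above.
mute_config = {
    "name": "organizations/123456789/muteConfigs/cis-13-exceptions",
    "description": "Mute CIS 1.3 controls that are out of scope for us",
    "filter": (
        'category="FLOW_LOGS_DISABLED" OR category="AUDIT_LOGGING_DISABLED"'
    ),
}
```

Any future finding matching the filter is muted automatically, which is why this scales better than tagging and muting findings by hand every time they appear (option A).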

Question 217

Exam Professional Cloud Security Engineer topic 1 question 217 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 217
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are routing all your internet facing traffic from Google Cloud through your on-premises internet connection. You want to accomplish this goal securely and with the highest bandwidth possible.

What should you do?

  • A. Create an HA VPN connection to Google Cloud. Replace the default 0.0.0.0/0 route.
  • B. Create a routing VM in Compute Engine. Configure the default route with the VM as the next hop.
  • C. Configure Cloud Interconnect with HA VPN. Replace the default 0.0.0.0/0 route to an on-premises destination.
  • D. Configure Cloud Interconnect and route traffic through an on-premises firewall.
Suggested Answer: D 🗳️

Comments

desertlotus1211
8 months, 1 week ago
I'm going to take back my answer - the answer should be 'D'.... The internet traffic from GCP is hairpinning through an internet connection on-premises, which means the on-premises network has two (2) separate connections: to GCP and to the internet.... So 'D' makes more sense
upvoted 1 times
...
desertlotus1211
8 months, 1 week ago
The question states 'on-premises internet connection'.... a Dedicated Interconnect IS NOT an internet connection. Therefore C & D cannot be the correct choice - that leaves 'A'
upvoted 1 times
...
Xoxoo
1 year ago
Selected Answer: D
Here's why option D is the recommended choice: Cloud Interconnect: Google Cloud Interconnect is designed to provide dedicated and high-bandwidth connections between your on-premises network and Google Cloud. It offers higher bandwidth and lower latency compared to typical VPN connections. On-Premises Firewall: By configuring Cloud Interconnect to route traffic through an on-premises firewall, you can ensure that all traffic between Google Cloud and the internet passes through your organization's firewall for security inspection and enforcement of security policies.
upvoted 2 times
Xoxoo
1 year ago
Option A (Creating an HA VPN connection) is suitable for setting up a VPN connection but may not provide the same high bandwidth as Cloud Interconnect. Additionally, replacing the default 0.0.0.0/0 route with an on-premises destination might not be necessary if you want to route all traffic through your on-premises internet connection. Option B (Creating a routing VM in Compute Engine) can be used for routing, but it may introduce additional complexity and potential single points of failure. Option C (Configuring Cloud Interconnect with HA VPN) combines two connectivity methods but may not be necessary if you only want to route traffic through your on-premises internet connection and not through a VPN.
upvoted 1 times
...
...
ArizonaClassics
1 year, 1 month ago
If your objective is to securely route all internet-facing traffic from Google Cloud through your on-premises internet connection with the highest bandwidth possible, you should go for: D. Configure Cloud Interconnect and route traffic through an on-premises firewall. Reasons: Highest Bandwidth: Cloud Interconnect offers higher bandwidth compared to VPN solutions. Security: You're routing the traffic through an on-premises firewall, which gives you centralized control over security policies. Stability: Cloud Interconnect is a dedicated connection, making it more reliable compared to VPNs. Latency: Cloud Interconnect usually provides lower latency than HA VPN solutions, which is beneficial for performance.
upvoted 1 times
...
cyberpunk21
1 year, 1 month ago
Selected Answer: D
it's faster than other options
upvoted 1 times
...
gcp4test
1 year, 2 months ago
Selected Answer: D
Goal - securely and with the highest bandwidth possible, only Dedicated Interconnect
upvoted 3 times
gcp4test
1 year, 2 months ago
Might be C; there is also a "security" requirement: https://cloud.google.com/network-connectivity/docs/interconnect/concepts/ha-vpn-interconnect
upvoted 4 times
akilaz
1 year, 1 month ago
"Each HA VPN tunnel can support up to 3 gigabits per second (Gbps) for the sum of ingress and egress traffic. This is a limitation of HA VPN." https://cloud.google.com/network-connectivity/docs/vpn/quotas#limits "An Interconnect connection is a logical connection to Google, made up of one or more physical circuits. You can request one of the following circuit choices: Up to 2 x 100 Gbps (200-Gbps) circuits." https://cloud.google.com/network-connectivity/docs/interconnect/quotas D imo
upvoted 1 times
...
...
...

Question 218

Exam Professional Cloud Security Engineer topic 1 question 218 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 218
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization uses Google Workspace Enterprise Edition for authentication. You are concerned about employees leaving their laptops unattended for extended periods of time after authenticating into Google Cloud. You must prevent malicious people from using an employee's unattended laptop to modify their environment.

What should you do?

  • A. Create a policy that requires employees to not leave their sessions open for long durations.
  • B. Review and disable unnecessary Google Cloud APIs.
  • C. Require strong passwords and 2SV through a security token or Google authenticator.
  • D. Set the session length timeout for Google Cloud services to a shorter duration.
Suggested Answer: D 🗳️

Comments

MoAk
4 months, 3 weeks ago
Selected Answer: D
TBH, it's the only answer that makes sense to the Q being asked.
upvoted 1 times
...
shmoeee
1 year ago
"extended periods of time" is the key phrase here
upvoted 1 times
...
ArizonaClassics
1 year, 7 months ago
D cool
upvoted 1 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: D
D is good
upvoted 2 times
...
pfilourenco
1 year, 8 months ago
Selected Answer: D
D is the correct.
upvoted 3 times
gcp4test
1 year, 8 months ago
D should be fine
upvoted 1 times
...
...

Question 219

Exam Professional Cloud Security Engineer topic 1 question 219 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 219
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are migrating an on-premises data warehouse to BigQuery, Cloud SQL, and Cloud Storage. You need to configure security services in the data warehouse. Your company compliance policies mandate that the data warehouse must:

• Protect data at rest with full lifecycle management on cryptographic keys.
• Implement a separate key management provider from data management.
• Provide visibility into all encryption key requests.

What services should be included in the data warehouse implementation? (Choose two.)

  • A. Customer-managed encryption keys
  • B. Customer-Supplied Encryption Keys
  • C. Key Access Justifications
  • D. Access Transparency and Approval
  • E. Cloud External Key Manager
Suggested Answer: CE 🗳️

Comments

YourFriendlyNeighborhoodSpider
3 weeks, 3 days ago
Selected Answer: AE
AE looks correct, many people in the comments explained why, take a note.
upvoted 1 times
...
7f97f9f
1 month, 2 weeks ago
Selected Answer: AE
A. CMEK allows you to control the encryption keys used to protect your data at rest. You have full control over the key lifecycle. This is a crucial component. C. KAJ requires that Google support personnel provide a justification for accessing customer content. It does not provide visibility into all encryption key requests. E. Cloud EKM allows you to use encryption keys that are managed in an external key management system (KMS) that you control. This fulfills the requirement of separating key management from data management. This also provides visibility into key requests, as they are being requested from your external KMS. Therefore the answer is A. and E.
upvoted 2 times
...
p981pa123
2 months, 3 weeks ago
Selected Answer: AE
A and E
upvoted 1 times
...
BPzen
4 months, 2 weeks ago
Selected Answer: AE
Why option A (customer-managed encryption keys) is correct: Control over keys - customer-managed encryption keys (CMEK) allow you to manage the lifecycle of encryption keys, including rotation, revocation, and deletion, through Cloud Key Management Service (KMS). Integration with BigQuery, Cloud SQL, and Cloud Storage - CMEK is supported across BigQuery, Cloud Storage, and Cloud SQL, enabling encryption of data at rest with your managed keys. Compliance support - CMEK satisfies the requirement to manage the full lifecycle of encryption keys.
upvoted 1 times
...
Bettoxicity
1 year ago
Selected Answer: AE
Why not C?: KAJ focuses on managing access control for Google personnel to resources, not specifically on encryption key visibility.
upvoted 1 times
...
Bettoxicity
1 year ago
Selected Answer: CE
Why not C?: KAJ focuses on managing access control for Google personnel to resources, not specifically on encryption key visibility.
upvoted 1 times
...
adb4007
1 year, 2 months ago
Selected Answer: CE
CE seems good to me. If you want to be compliant with "Implement a separate key management provider from data management," you must have 2 providers, and "B" CSEK couldn't work, I think. "E" works for the first two policies. "C" seems good for the third policy.
upvoted 2 times
...
ArizonaClassics
1 year, 7 months ago
C. Key Access Justifications Key Access Justifications can provide visibility into all encryption key requests, satisfying your third condition. This feature enables you to get justification for every request to use a decryption key, giving you the information you need to decide whether to approve or deny the request in real-time. E. Cloud External Key Manager The Cloud External Key Manager allows you to use and manage encryption keys stored outside of Google's infrastructure, thereby providing a separate key management provider from data management. This meets your first and second conditions because it enables you to fully manage the lifecycle of your cryptographic keys while storing them outside Google Cloud.
upvoted 4 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: CE
looks good to me
upvoted 2 times
...
anshad666
1 year, 7 months ago
Selected Answer: CE
C - https://cloud.google.com/assured-workloads/key-access-justifications/docs/overview E - https://cloud.google.com/kms/docs/ekm
upvoted 2 times
...
STomar
1 year, 8 months ago
AE: https://cloud.google.com/kms/docs/cmek A: CMEK gives you control over the keys that protect your data at rest in Google Cloud. Using CMEK gives you control over more aspects of the lifecycle and management of your keys.
upvoted 1 times
...
akg001
1 year, 8 months ago
Selected Answer: CE
C,E - looks correct to me
upvoted 3 times
...
Sanjana2020
1 year, 8 months ago
I think this is BE. They mention that they want the data and the keys to be in separate locations. So that would mean CSEK. And that is handled by External Key Manager. So BE.
upvoted 2 times
...
gcp4test
1 year, 8 months ago
Selected Answer: CE
Implement a separate key management provider from data management - so the key must be outside of GCP - E. Provide visibility into all encryption key requests - this can be supported by C.
upvoted 4 times
...
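To make the CMEK-plus-EKM combination discussed above concrete, here is a sketch of a BigQuery dataset configuration whose default encryption key lives in an external key manager. Every resource name is a placeholder, not a real project or key.

```python
# Sketch: a BigQuery dataset whose default CMEK is a Cloud EKM-backed key.
# Every resource name below is a placeholder, not a real project or key.
ekm_key = (
    "projects/my-kms-project/locations/us/keyRings/dw-ring/"
    "cryptoKeys/dw-ekm-key"
)

dataset_config = {
    "datasetReference": {"projectId": "my-dw-project", "datasetId": "warehouse"},
    # Tables created in this dataset inherit this key unless overridden.
    "defaultEncryptionConfiguration": {"kmsKeyName": ekm_key},
}
```

Keeping the key material in the external manager is what satisfies "separate key management provider from data management," while the external KMS's own logs give visibility into key requests.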

Question 220

Exam Professional Cloud Security Engineer topic 1 question 220 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 220
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You manage one of your organization's Google Cloud projects (Project A). A VPC Service Control (SC) perimeter is blocking API access requests to this project, including Pub/Sub. A resource running under a service account in another project (Project B) needs to collect messages from a Pub/Sub topic in your project. Project B is not included in a VPC SC perimeter. You need to provide access from Project B to the Pub/Sub topic in Project A using the principle of least privilege.

What should you do?

  • A. Configure an ingress policy for the perimeter in Project A, and allow access for the service account in Project B to collect messages.
  • B. Create an access level that allows a developer in Project B to subscribe to the Pub/Sub topic that is located in Project A.
  • C. Create a perimeter bridge between Project A and Project B to allow the required communication between both projects.
  • D. Remove the Pub/Sub API from the list of restricted services in the perimeter configuration for Project A.
Suggested Answer: A 🗳️

Comments

MoAk
4 months, 3 weeks ago
Selected Answer: A
The answer is Answer A. Why? Because Project B does not belong in a service perimeter itself. You cannot create a perimeter bridge without being part of a service perimeter. Answer is A. https://cloud.google.com/vpc-service-controls/docs/share-across-perimeters
upvoted 1 times
...
Sundar_Pichai
7 months, 2 weeks ago
Selected Answer: A
I spent some time going back and forth on this question. I believe the Answer is A. C can't be right because project B isn't part of another perimeter.
upvoted 2 times
...
jujanoso
9 months ago
Selected Answer: A
Principle of Least Privilege: By configuring an ingress policy, you can precisely define which specific service account from Project B is allowed to access the Pub/Sub topic in Project A. This approach ensures that only the necessary access is granted, aligning with the principle of least privilege.
upvoted 1 times
...
shanwford
11 months, 3 weeks ago
Selected Answer: A
Should be (A) according to https://cloud.google.com/vpc-service-controls/docs/share-across-perimeters. A perimeter bridge works between projects in different service perimeters. Project B is not in a perimeter, so a bridge will not work here.
upvoted 1 times
...
b6f53d8
1 year, 2 months ago
Selected Answer: B
https://cloud.google.com/vpc-service-controls/docs/use-access-levels#create_an_access_level
upvoted 1 times
Nachtwaker
1 year, 1 month ago
Can't be B: You can only use public IP address ranges in the access levels for IP-based allowlists. You cannot include an internal IP address in these allowlists. Internal IP addresses are associated with a VPC network, and VPC networks must be referenced by their containing project using an ingress or egress rule, or a service perimeter. https://cloud.google.com/vpc-service-controls/docs/use-access-levels#create_an_access_level:~:text=You%20can%20only,service%20perimeter.
upvoted 2 times
...
...
MisterHairy
1 year, 4 months ago
Selected Answer: C
The correct answer is C. You should create a perimeter bridge between Project A and Project B to allow the required communication between both projects. VPC Service Controls (SC) help to mitigate data exfiltration risks. They provide a security perimeter around Google Cloud resources to constrain data within a VPC and help protect it from being leaked. In this case, a resource in Project B needs to access a Pub/Sub topic in Project A, but Project A is within a VPC SC perimeter that’s blocking API access. A perimeter bridge can be created to allow communication between the two projects. This solution adheres to the principle of least privilege because it only allows the specific communication required, rather than changing the perimeter settings or access levels which could potentially allow more access than necessary. the principle of least privilege is about giving a user or service account only those privileges which are essential to perform its intended function. Options A and B could potentially grant more access than necessary, which is why they are not the best solutions. Option C, creating a perimeter bridge, allows just the specific communication required, adhering to the principle of least privilege.
upvoted 1 times
shmoeee
1 year ago
The question does not say that Project B is in a perimeter. Ans B can't be correct unless you're assuming
upvoted 2 times
...
...
desertlotus1211
1 year, 7 months ago
Answer B: https://cloud.google.com/vpc-service-controls/docs/use-access-levels#create_an_access_level To grant controlled access to protected Google Cloud resources in service perimeters from outside a perimeter, use access levels. The following examples explain how to create an access level using different conditions: IP address; user and service accounts (principals); device policy
upvoted 1 times
...
Andrei_Z
1 year, 7 months ago
Selected Answer: B
By creating an access level, you can specify precisely who in Project B should have access to subscribe to the Pub/Sub topic in Project A, ensuring that access is granted to only the necessary individuals or service accounts. This approach aligns more closely with the principle of least privilege.
upvoted 1 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: C
A. Can be correct, but if we configure an ingress policy, all projects can access or ping this project, so too much risk. C. A bridge can only be created between two perimeters, and they haven't mentioned that Project B is in a perimeter; we have to assume it.
upvoted 2 times
cyberpunk21
1 year, 7 months ago
My bad i choose option A, https://cloud.google.com/vpc-service-controls/docs/ingress-egress-rules#definition-ingress-egress
upvoted 3 times
...
...
anshad666
1 year, 7 months ago
Selected Answer: A
Ingress: Refers to any access by an API client from outside the service perimeter to resources within a service perimeter. Example: A Cloud Storage client outside a service perimeter calling Cloud Storage read, write, or copy operations on a Cloud Storage resource within the perimeter.
upvoted 2 times
...
Mithung30
1 year, 8 months ago
Answer is C. https://cloud.google.com/vpc-service-controls/docs/share-across-perimeters
upvoted 2 times
...
gcp4test
1 year, 8 months ago
Selected Answer: A
A is correct. Can't be C: a bridge is between perimeters, but Project B is not in any perimeter
upvoted 3 times
mjcts
1 year, 2 months ago
This is the correct reason why the answer is A
upvoted 1 times
...
...
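A minimal sketch of the ingress rule from answer A follows. The project numbers and the collector service account are hypothetical placeholders; a spec in this shape is what you would attach to the Project A perimeter.

```python
# Minimal sketch of a VPC-SC ingress rule for the Project A perimeter. The
# project numbers and the collector service account are hypothetical.
ingress_rule = {
    "ingressFrom": {
        # Only this one identity from Project B is let in (least privilege).
        "identities": [
            "serviceAccount:collector@project-b.iam.gserviceaccount.com"
        ],
        "sources": [{"resource": "projects/222222222222"}],  # Project B
    },
    "ingressTo": {
        "operations": [
            {
                "serviceName": "pubsub.googleapis.com",
                "methodSelectors": [{"method": "*"}],
            }
        ],
        "resources": ["projects/111111111111"],  # Project A
    },
}
```

Scoping `ingressFrom` to a single service account and `ingressTo` to the Pub/Sub API only is what keeps this tighter than a perimeter bridge or removing Pub/Sub from the restricted-services list.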

Question 221

Exam Professional Cloud Security Engineer topic 1 question 221 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 221
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You define central security controls in your Google Cloud environment. For one of the folders in your organization, you set an organizational policy to deny the assignment of external IP addresses to VMs. Two days later, you receive an alert about a new VM with an external IP address under that folder.

What could have caused this alert?

  • A. The VM was created with a static external IP address that was reserved in the project before the organizational policy rule was set.
  • B. The organizational policy constraint wasn't properly enforced and is running in "dry run" mode.
  • C. At the project level, the organizational policy control has been overwritten with an "allow" value.
  • D. The policy constraint on the folder level does not have any effect because of an "allow" value for that constraint on the organizational level.
Suggested Answer: A 🗳️

Comments

KLei
3 months, 2 weeks ago
Selected Answer: A
- Enforcement of most organization policies is not retroactive. - The policies are merged, and the DENY value takes precedence (https://cloud.google.com/resource-manager/docs/organization-policy/understanding-hierarchy#reconciling_policy_conflicts)
upvoted 2 times
...
Pime13
4 months ago
Selected Answer: A
https://cloud.google.com/resource-manager/docs/organization-policy/creating-managing-policies#creating_and_editing_policies Enforcement of most organization policies is not retroactive. If a new organization policy sets a restriction on an action or state that a service is already in, the policy is considered to be in violation, but the service will not stop its original behavior. Organization policy constraints that are retroactive note this property in their description.
upvoted 2 times
...
BPzen
4 months, 1 week ago
Selected Answer: A
When you define an organizational policy in Google Cloud, it applies to future actions and configurations, not to resources that already exist or were configured before the policy was set. If a static external IP address had been reserved in the project prior to the policy being applied, it could be assigned to a new VM after the policy enforcement starts. This would result in a VM with an external IP address, despite the organizational policy. C. A project-level organizational policy control has been overwritten with an "allow" value. Organizational policies propagate from the top (organization) to the bottom (project), unless specifically overridden. However, the question specifies the policy was applied at the folder level, which would affect all projects under that folder. This is less likely unless explicitly overridden at the project level, which the question does not suggest.
upvoted 2 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: B
Tricky one, tbh. Dry-run mode for org policies now exists, so technically speaking, answer B could now be the answer to the question. Either way, it's between B and C in my opinion. https://cloud.google.com/resource-manager/docs/organization-policy/dry-run-policy
upvoted 3 times
...
shmoeee
1 year ago
"under that folder"...
upvoted 1 times
...
desertlotus1211
1 year, 2 months ago
Answer A: if a static external IP address was reserved before the organizational policy to deny the assignment of external IP addresses to VMs was enacted, creating a VM and attaching this pre-reserved static external IP address would not violate the policy.
upvoted 2 times
...
winston9
1 year, 2 months ago
Selected Answer: D
in this scenario, the alert is triggered because the VM creation violates the folder-level "deny" policy, but that restriction is nullified by the overriding "allow" value inherited from the organization-level policy.
upvoted 1 times
winston9
1 year, 2 months ago
I will change it to A. Usually organization policy constraints are not retroactive; a constraint can be retroactively enforced only if it is labeled as such on the Organization Policy Constraints page, and the question does not mention this.
upvoted 1 times
...
...
MMNB2023
1 year, 4 months ago
Selected Answer: A
According to this link https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#disableexternalip
upvoted 2 times
MMNB2023
1 year, 4 months ago
Sorry, the right answer is C; the question talks about a "new VM".
upvoted 1 times
...
...
MMNB2023
1 year, 4 months ago
I think A is the correct answer, because this organization policy is not retroactive. https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#disableexternalip
upvoted 1 times
...
MisterHairy
1 year, 4 months ago
Selected Answer: C
The correct answer is C. At a project level, the organizational policy control has been overwritten with an “allow” value. Policies can be overridden at a lower level (like a project). So, if an “allow” policy was set at the project level, it would override the “deny” policy set at the folder level. This could allow a VM with an external IP address to be created under that folder, despite the folder-level policy. Changes to organizational policies can take time to propagate and be enforced across all resources, but in this case, the alert was received two days after the policy was set, which should have been sufficient time for the policy to take effect. Therefore, options A, B, and D are less likely.
upvoted 2 times
...
EVEGCP
1 year, 4 months ago
A: Enforcement of most organization policies is not retroactive. If a new organization policy sets a restriction on an action or state that a service is already in, the policy is considered to be in violation, but the service will not stop its original behavior. https://cloud.google.com/resource-manager/docs/organization-policy/creating-managing-policies#creating_and_editing_policies
upvoted 2 times
...
vividg
1 year, 6 months ago
https://cloud.google.com/resource-manager/docs/organization-policy/understanding-hierarchy#reconciling_policy_conflicts Says "The policies are merged and the DENY value takes precedence" So.. How can C be the answer?
upvoted 4 times
daidai75
1 year ago
This scenario happens when "inheritFromParent = true". If "inheritFromParent = false", the "reconciling_policy_conflicts" rule will not work.
upvoted 1 times
...
...
Xoxoo
1 year, 6 months ago
Selected Answer: C
Here's why option C is the likely cause:

Overriding policy at the project level: Google Cloud allows policies to be set at different levels of the resource hierarchy, such as the organization, folder, or project level. If a policy is set at the organization or folder level to deny external IP addresses but is then overridden with an "allow" value at the project level, the project-level value would take precedence, allowing VMs within that project to have external IP addresses.

Alert trigger: when an organizational policy constraint is overridden at a lower level (e.g., project), it can lead to situations where the policy is not enforced as expected. This can result in alerts or notifications when policy violations occur.
upvoted 3 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: C
A. Even if the IP was created after the org policy was set, the policy won't allow it to be used. B. We can preview the org policy using dry run (preview mode); in this mode the policy won't deny the usage, but it will notify. C. We can't put a deny org policy at a higher level and expect it to hold if it has been overwritten with an allow value at the project level.
upvoted 3 times
...
Simon6666
1 year, 7 months ago
Selected Answer: C
C should be correct https://cloud.google.com/resource-manager/docs/organization-policy/understanding-hierarchy
upvoted 2 times
...
ymkk
1 year, 7 months ago
Selected Answer: C
C. At the project level, the organizational policy control has been overwritten with an "allow" value.
upvoted 2 times
...
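For reference, the folder-level deny discussed in this thread boils down to a policy body for the real constraint `constraints/compute.vmExternalIpAccess`. The sketch below builds that body in Python; the folder ID is a placeholder, and the helper function is mine, not part of any Google SDK.

```python
# Minimal sketch of a folder-level org policy that denies external IPs on VMs.
# The constraint name is real; the folder ID and helper are hypothetical.
import json

def deny_external_ip_policy(folder_id: str) -> dict:
    """Build the policy body for constraints/compute.vmExternalIpAccess."""
    return {
        "name": f"folders/{folder_id}/policies/compute.vmExternalIpAccess",
        "spec": {
            # With merging, a DENY at this level takes precedence over a
            # parent ALLOW; enforcement is not retroactive for existing VMs.
            "inheritFromParent": True,
            "rules": [{"denyAll": True}],
        },
    }

policy = deny_external_ip_policy("123456789")  # hypothetical folder ID
print(json.dumps(policy, indent=2))
```

Note the comment about retroactivity: the policy blocks future assignments but does not strip external IPs already attached to running VMs, which is the crux of answer A.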

Question 222

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 222 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 222
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your company recently published a security policy to minimize the usage of service account keys. On-premises Windows-based applications are interacting with Google Cloud APIs. You need to implement Workload Identity Federation (WIF) with your identity provider on-premises.

What should you do?

  • A. Set up a workload identity pool with your corporate Active Directory Federation Service (ADFS). Configure a rule to let principals in the pool impersonate the Google Cloud service account.
  • B. Set up a workload identity pool with your corporate Active Directory Federation Service (ADFS). Let all principals in the pool impersonate the Google Cloud service account.
  • C. Set up a workload identity pool with an OpenID Connect (OIDC) service on the same machine. Configure a rule to let principals in the pool impersonate the Google Cloud service account.
  • D. Set up a workload identity pool with an OpenID Connect (OIDC) service on the same machine. Let all principals in the pool impersonate the Google Cloud service account.
Show Suggested Answer Hide Answer
Suggested Answer: A 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
Mithung30
Highly Voted 8 months, 1 week ago
A. Set up a workload identity pool with your corporate Active Directory Federation Service (ADFS). Configure a rule to let principals in the pool impersonate the Google Cloud service account. This is the best option because it allows you to control who can impersonate the Google Cloud service account.
upvoted 5 times
...
MMNB2023
Most Recent 4 months, 2 weeks ago
Selected Answer: A
The right answer, following the least-privilege principle.
upvoted 3 times
...
Xoxoo
6 months, 3 weeks ago
Selected Answer: A
Here's why option A is the preferred choice:

Workload identity pool: using your corporate ADFS for identity federation is a common and secure way to manage identities and access to Google Cloud resources.

Configure a rule: configuring a rule in the workload identity pool allows you to specify which principals (users or entities) in your corporate ADFS can impersonate the Google Cloud service account. This approach adheres to the principle of least privilege by allowing only specific users or entities to impersonate the service account.
upvoted 3 times
...
cyberpunk21
7 months, 3 weeks ago
Selected Answer: A
A is correct. B would also work, but letting all principals impersonate the service account causes chaos.
upvoted 3 times
...
akg001
8 months ago
Selected Answer: A
A is correct
upvoted 4 times
...
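The "rule" in option A ultimately scopes which federated identities may impersonate the service account. The member-string format below follows the Workload Identity Federation documentation; the project number, pool ID, and subject are placeholder values, and the helper is only an illustration.

```python
# Sketch of the IAM member string that scopes service-account impersonation
# to a specific principal in a workload identity pool (option A).
# Pool/subject values are hypothetical; the format follows the WIF docs.
def wif_member(project_number: str, pool_id: str, subject: str) -> str:
    return (
        f"principal://iam.googleapis.com/projects/{project_number}"
        f"/locations/global/workloadIdentityPools/{pool_id}"
        f"/subject/{subject}"
    )

# This member would be granted roles/iam.workloadIdentityUser on the target
# service account (e.g. via gcloud iam service-accounts
# add-iam-policy-binding), rather than opening the pool to all principals.
member = wif_member("123456789", "adfs-pool", "app-server")
print(member)
```

Binding only specific `subject` (or attribute-mapped) members is what distinguishes option A from option B's "all principals" approach.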

Question 223

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 223 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 223
Topic #: 1
[All Professional Cloud Security Engineer Questions]

After completing a security vulnerability assessment, you learned that cloud administrators leave Google Cloud CLI sessions open for days. You need to reduce the risk of attackers who might exploit these open sessions by setting these sessions to the minimum duration.

What should you do?

  • A. Set the session duration for the Google session control to one hour.
  • B. Set the reauthentication frequency for the Google Cloud Session Control to one hour.
  • C. Set the organization policy constraint constraints/iam.allowServiceAccountCredentialLifetimeExtension to one hour.
  • D. Set the organization policy constraint constraints/iam.serviceAccountKeyExpiryHours to one hour and inheritFromParent to false.
Show Suggested Answer Hide Answer
Suggested Answer: B 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
Pime13
4 months ago
Selected Answer: B
https://support.google.com/a/answer/9368756?hl=en

Reauthentication frequency: setting the reauthentication frequency ensures that users must re-authenticate after a specified period, in this case one hour. This reduces the window of opportunity for an attacker to exploit an open session.

A. Session duration: while setting the session duration can help, reauthentication frequency is more directly related to ensuring users re-authenticate regularly.
C. Service account credential lifetime: this constraint is specific to service account credentials and does not directly address user session durations.
D. Service account key expiry: similar to option C, this focuses on service account keys rather than user session management.
upvoted 1 times
...
MoAk
4 months, 3 weeks ago
Selected Answer: B
As of late, it appears that answer B is the only correct answer. https://support.google.com/a/answer/7576830?hl=en&ref_topic=7556597&sjid=10540575594857625427-EU
upvoted 1 times
...
Bettoxicity
1 year ago
Selected Answer: D
D:
Granular control: this policy constraint specifically targets serviceAccountKeyExpiryHours, directly controlling how long service account credentials (used by the Cloud CLI) remain valid.
Minimum duration: setting the expiry to one hour enforces session termination after that timeframe, mitigating the risk of open sessions being exploited.
Inheritance override: using inheritFromParent: false ensures this policy applies to the specific organization, preventing accidental overrides from higher levels in the hierarchy.
Why not B? Reauthentication frequency might prompt users to re-authenticate within the console, but it doesn't directly terminate open Cloud CLI sessions.
upvoted 1 times
...
MMNB2023
1 year, 4 months ago
Selected Answer: B
One hour is the minimum duration and 24 hours the maximum.
upvoted 3 times
...
MisterHairy
1 year, 4 months ago
Selected Answer: B
The best option would be B. Set the reauthentication frequency for the Google Cloud Session Control to one hour. This is because Google Cloud Session Control allows you to set a reauthentication frequency, which determines how often users are prompted to reauthenticate during their session. By setting this to one hour, you ensure that CLI sessions are only open for a maximum of one hour without reauthentication, reducing the risk of attackers exploiting these open sessions. Option A is incorrect because there is no such thing as a “Google session control”. Option C and D are related to service account keys and credential lifetime extension, not user sessions in the Google Cloud CLI.
upvoted 3 times
...
alvinlxw
1 year, 5 months ago
Selected Answer: B
https://cloud.google.com/blog/products/identity-security/improve-security-posture-with-time-bound-session-length
upvoted 1 times
...
ArizonaClassics
1 year, 6 months ago
B. Set the reauthentication frequency for the Google Cloud Session Control to one hour. Option B is the correct approach because by setting the reauthentication frequency to one hour, you're ensuring that any active sessions automatically require reauthentication after that time period, mitigating the risk associated with long-lived sessions.
upvoted 1 times
...
desertlotus1211
1 year, 7 months ago
Answer B: https://support.google.com/a/answer/9368756?hl=en (Set session length for Google Cloud services). Answers A and B are a play on words: in order to set the session duration (A), you must adjust the reauthentication policy duration (B).
upvoted 1 times
...
GCBC
1 year, 7 months ago
Selected Answer: A
Setting the session length to one hour is good; the other options are disruptive, and expiring or reauthenticating every hour is not good for the user experience.
upvoted 2 times
...
BR1123
1 year, 7 months ago
D. By setting the organization policy constraint constraints/iam.serviceAccountKeyExpiryHours to one hour and inheritFromParent to false, you are specifically controlling the duration for which the service account keys (credentials) are valid. This directly addresses the issue of open sessions and the risk of exploitation by ensuring that the credentials used for these sessions expire after a shorter time, reducing the window of opportunity for attackers. In summary, option D provides a more targeted approach to mitigating the risk posed by open Google Cloud CLI sessions by setting the service account key expiry duration to one hour and ensuring it doesn't inherit from parent policies.
upvoted 1 times
...
anshad666
1 year, 7 months ago
Selected Answer: B
https://support.google.com/a/answer/9368756?hl=en&ref_topic=7556597&sjid=4209356388025132107-AP
upvoted 3 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: A
A and B both satisfy the question, but A is the more effective and easier to do; B does the same job, by the way.
upvoted 1 times
desertlotus1211
1 year, 7 months ago
B is what you need to do....
upvoted 1 times
...
...
RuchiMishra
1 year, 8 months ago
Selected Answer: B
https://support.google.com/a/answer/9368756?hl=en
upvoted 3 times
...
gcp4test
1 year, 8 months ago
Selected Answer: A
C, D: serviceAccountKeyExpiryHours is for service accounts, not humans (users) as in the question. B: reauthenticating the user once every hour is not user friendly. So the correct answer is A.
upvoted 3 times
pfilourenco
1 year, 8 months ago
The session-length control settings affect sessions with all Google web properties that a user accesses while signed in. I think B is the most appropriate: "for Google Cloud tools, and how these controls interact with the parent session control on this page, see Set session length for Google Cloud services." https://support.google.com/a/answer/7576830?hl=en https://support.google.com/a/answer/9368756?hl=en
upvoted 3 times
...
...
Mithung30
1 year, 8 months ago
D. Set the organization policy constraint constraints/iam.serviceAccountKeyExpiryHours to one hour and inheritFromParent to false. This will set the default expiry time for service account keys to one hour and prevent the setting from being inherited from parent resources, ensuring that all service account keys expire after one hour. This will help reduce the risk of attackers who might exploit open sessions.
upvoted 2 times
...
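Whichever control is chosen, the intended effect of a one-hour reauthentication frequency (option B) is simply that a session older than an hour must re-authenticate. The sketch below models that effect for illustration only; it is not a Google API, and the timestamps are made up.

```python
# Illustration of a one-hour reauthentication window (option B): given when a
# CLI session last authenticated, decide whether reauth is now required.
# Purely a sketch of the policy's effect, not an actual Google API.
from datetime import datetime, timedelta

REAUTH_FREQUENCY = timedelta(hours=1)  # minimum configurable duration

def needs_reauth(last_auth: datetime, now: datetime) -> bool:
    """True once the session has outlived the reauthentication window."""
    return now - last_auth >= REAUTH_FREQUENCY

now = datetime(2024, 1, 1, 12, 0)
assert needs_reauth(datetime(2024, 1, 1, 10, 0), now)       # stale session
assert not needs_reauth(datetime(2024, 1, 1, 11, 30), now)  # still fresh
```

The point of the question is that this window bounds how long an abandoned gcloud session remains usable by an attacker.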

Question 224

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 224 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 224
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You have numerous private virtual machines on Google Cloud. You occasionally need to manage the servers through Secure Socket Shell (SSH) from a remote location. You want to configure remote access to the servers in a manner that optimizes security and cost efficiency.

What should you do?

  • A. Create a site-to-site VPN from your corporate network to Google Cloud.
  • B. Configure server instances with public IP addresses. Create a firewall rule to only allow traffic from your corporate IPs.
  • C. Create a firewall rule to allow access from the Identity-Aware Proxy (IAP) IP range. Grant the role of an IAP-secured Tunnel User to the administrators.
  • D. Create a jump host instance with public IP. Manage the instances by connecting through the jump host.
Show Suggested Answer Hide Answer
Suggested Answer: C 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
Pime13
4 months ago
Selected Answer: C
C - https://cloud.google.com/iap#section-2
upvoted 1 times
...
MMNB2023
4 months, 2 weeks ago
Selected Answer: C
Using IAP is more secure and cost effective than a bastion VM (VM cost plus maintenance), especially since IAP is a managed security solution.
upvoted 1 times
...
ArizonaClassics
6 months, 3 weeks ago
C. Create a firewall rule to allow access from the Identity-Aware Proxy (IAP) IP range. Grant the role of an IAP-secured Tunnel User to the administrators. Google's Identity-Aware Proxy allows you to establish a secure and context-aware access to your VMs without using a traditional VPN. It's a cost-efficient and secure method, especially for occasional access. You can enforce identity and context-aware access controls, ensuring only authorized users can SSH into the VMs.
upvoted 1 times
...
anshad666
7 months, 2 weeks ago
Selected Answer: C
Typical use case for IAP
upvoted 3 times
...
cyberpunk21
7 months, 3 weeks ago
Selected Answer: A
I think only option A is cost effective, so I choose option A.
upvoted 1 times
...
Mithung30
8 months, 1 week ago
C. Create a firewall rule to allow access from the Identity-Aware Proxy (IAP) IP range. Grant the role of an IAP-secured Tunnel User to the administrators. This is a good option for organizations that want to use IAP to secure their remote access. IAP is a Google-managed service that provides a secure way to access Google Cloud resources from the internet.

D. Create a jump host instance with public IP. Manage the instances by connecting through the jump host. This is a good option for organizations that want a secure way to manage their VMs without exposing them to the public internet. The jump host is a server that is exposed to the public internet and has access to the VMs. Administrators can connect to the jump host and then use it to manage the VMs.

In this case, the best option is to create a jump host instance with public IP. This will allow administrators to manage the VMs securely without exposing them to the public internet. The jump host can be configured with a firewall rule to only allow traffic from trusted IP addresses. This will help protect the VMs from unauthorized access.
upvoted 1 times
...
alkaloid
8 months, 1 week ago
Selected Answer: C
C - correct. With TCP forwarding, IAP can protect SSH and RDP access to your VMs hosted on Google Cloud. Your VM instances don't even need public IP addresses. https://cloud.google.com/iap#section-2
upvoted 3 times
...

Question 225

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 225 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 225
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization's record data exists in Cloud Storage. You must retain all record data for at least seven years. This policy must be permanent.

What should you do?

  • A. 1. Identify buckets with record data.
    2. Apply a retention policy, and set it to retain for seven years.
    3. Monitor the bucket by using log-based alerts to ensure that no modifications to the retention policy occurs.
  • B. 1. Identify buckets with record data.
    2. Apply a retention policy, and set it to retain for seven years.
    3. Remove any Identity and Access Management (IAM) roles that contain the storage buckets update permission.
  • C. 1. Identify buckets with record data.
    2. Enable the bucket policy only to ensure that data is retained.
    3. Enable bucket lock.
  • D. 1. Identify buckets with record data.
    2. Apply a retention policy and set it to retain for seven years.
    3. Enable bucket lock.
Show Suggested Answer Hide Answer
Suggested Answer: D 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
cyberpunk21
7 months, 3 weeks ago
Selected Answer: D
If the policy did not need to be permanent, the answer would have been A.
upvoted 1 times
...
Mithung30
8 months, 1 week ago
D. https://cloud.google.com/storage/docs/bucket-lock
upvoted 1 times
...
pfilourenco
8 months, 1 week ago
Selected Answer: D
D is the correct
upvoted 2 times
...
alkaloid
8 months, 1 week ago
Selected Answer: D
D is the right choice https://cloud.google.com/storage/docs/bucket-lock
upvoted 2 times
...
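Cloud Storage expresses a bucket's retention policy in seconds, so option D's seven-year retention is just an arithmetic exercise before bucket lock makes it permanent. A quick sketch (ignoring leap days for brevity):

```python
# Sketch: a seven-year retention policy for a bucket (option D).
# Cloud Storage expresses retention in seconds; bucket lock then makes the
# policy permanent and irremovable.
SECONDS_PER_DAY = 24 * 60 * 60
retention_seconds = 7 * 365 * SECONDS_PER_DAY  # ignores leap days for brevity

retention_policy = {"retentionPeriod": str(retention_seconds)}
print(retention_policy)  # {'retentionPeriod': '220752000'}
```

Once the bucket is locked, the retention period can be increased but never reduced or removed, which is exactly the "permanent" requirement in the question.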

Question 226

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 226 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 226
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization wants to protect all workloads that run on Compute Engine VM to ensure that the instances weren't compromised by boot-level or kernel-level malware. Also, you need to ensure that data in use on the VM cannot be read by the underlying host system by using a hardware-based solution.

What should you do?

  • A. 1. Use Google Shielded VM including secure boot, Virtual Trusted Platform Module (vTPM), and integrity monitoring.
    2. Create a Cloud Run function to check for the VM settings, generate metrics, and run the function regularly.
  • B. 1. Activate Virtual Machine Threat Detection in Security Command Center (SCC) Premium.
    2. Monitor the findings in SCC.
  • C. 1. Use Google Shielded VM including secure boot, Virtual Trusted Platform Module (vTPM), and integrity monitoring.
    2. Activate Confidential Computing.
    3. Enforce these actions by using organization policies.
  • D. 1. Use secure hardened images from the Google Cloud Marketplace.
    2. When deploying the images, activate the Confidential Computing option.
    3. Enforce the use of the correct images and Confidential Computing by using organization policies.
Show Suggested Answer Hide Answer
Suggested Answer: C 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
crazycosmos
4 months, 1 week ago
Selected Answer: C
C fits the best
upvoted 1 times
...
MMNB2023
10 months, 3 weeks ago
Selected Answer: C
Confidential computing for data security in use.
upvoted 1 times
...
Andrei_Z
1 year, 1 month ago
Selected Answer: C
Confidential Computing is about data in use, not data at rest, but C is the correct answer as no other option fits better.
upvoted 1 times
...
rishi110196
1 year, 1 month ago
C is correct because the question says data in use should remain secure, which can only be done by Confidential VMs.
upvoted 1 times
...
gcp4test
1 year, 2 months ago
Selected Answer: C
C it the best option
upvoted 2 times
...
pfilourenco
1 year, 2 months ago
Selected Answer: C
C is the correct
upvoted 2 times
...
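Option C combines two instance-level settings. The sketch below shows the corresponding Compute Engine instance config fields; the field names follow the Compute API, but treat the snippet as illustrative rather than a complete instance resource.

```python
# Sketch of the Compute Engine instance settings option C combines:
# Shielded VM (secure boot, vTPM, integrity monitoring) against boot/kernel
# malware, plus Confidential Computing to encrypt memory in use.
# Field names follow the Compute API; this is not a full instance body.
instance_config = {
    "shieldedInstanceConfig": {
        "enableSecureBoot": True,
        "enableVtpm": True,
        "enableIntegrityMonitoring": True,
    },
    "confidentialInstanceConfig": {
        # Hardware-based memory encryption keeps data in use unreadable
        # by the underlying host.
        "enableConfidentialCompute": True,
    },
}
```

Organization policy constraints can then enforce that all new VMs carry both configurations, which is the third step in option C.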

Question 227

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 227 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 227
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are migrating your users to Google Cloud. There are cookie replay attacks with Google web and Google Cloud CLI SDK sessions on endpoint devices. You need to reduce the risk of these threats.

What should you do? (Choose two.)

  • A. Configure Google session control to a shorter duration.
  • B. Set an organizational policy for OAuth 2.0 access token with a shorter duration.
  • C. Set a reauthentication policy for Google Cloud services to a shorter duration.
  • D. Configure a third-party identity provider with session management.
  • E. Enforce Security Key Authentication with 2SV.
Show Suggested Answer Hide Answer
Suggested Answer: A 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
i_am_robot
Highly Voted 1 year, 6 months ago
Selected Answer: A
Correct anwers are A & E. A. Configuring Google session control to a shorter duration reduces the time window in which an attacker can use a replayed cookie to gain unauthorized access, thereby enhancing security. E. Enforcing Security Key Authentication with 2-Step Verification (2SV) adds an additional layer of security by requiring users to verify their identity using a physical security key, making it more difficult for attackers to gain unauthorized access even if they have a replayed cookie.
upvoted 9 times
...
ymkk
Highly Voted 1 year, 7 months ago
B and E Set an organizational policy for OAuth 2.0 access token with a shorter duration is a good approach to reduce the time during which a stolen access token could be exploited. Shortening the access token duration helps mitigate the impact of cookie replay attacks. OAuth 2.0 access tokens are commonly used to authenticate API requests. By reducing their duration, you limit the time frame in which an attacker could potentially abuse a stolen token. Enforce Security Key Authentication with 2SV adds strong authentication to user sessions. Security keys are hardware-based tokens that provide strong authentication and help prevent unauthorized access, including cookie replay attacks. By requiring Security Key Authentication with 2SV (Two-Step Verification), you enhance the security of user accounts.
upvoted 5 times
...
BPzen
Most Recent 4 months, 1 week ago
Selected Answer: A
A and B.

A. Configure Google session control to a shorter duration: reducing the session duration decreases the time a session cookie remains valid, thus limiting the risk of a replay attack. Shorter session times force more frequent reauthentication and can prevent attackers from leveraging stolen session cookies effectively.

B. Set an organizational policy for OAuth 2.0 access token with a shorter duration: OAuth 2.0 access tokens are used for authenticating requests to Google Cloud APIs. By setting a shorter expiration time for these tokens, you reduce the window of opportunity for attackers to exploit stolen tokens in replay attacks.
upvoted 1 times
...
Mr_MIXER007
7 months ago
Missing missing missing
upvoted 1 times
...
Sundar_Pichai
7 months, 2 weeks ago
B & E. Limiting the session duration by itself does nothing except give a malicious attacker a shorter time to do the 'bad thing'; limiting the time that the cookie is actually usable, however, could prevent an attacker from impersonating a user. Additionally, 2SV is nearly always a right answer.
upvoted 1 times
...
dija123
1 year ago
Selected Answer: A
A,C are correct
upvoted 1 times
...
acloudgurrru
1 year, 1 month ago
You shorten the session duration by setting the reauthentication policy, so the answer is C and not A.
upvoted 1 times
...
rglearn
1 year, 6 months ago
Selected Answer: C
A, C: keeping a shorter session and enforcing reauthentication after a certain period of time will help address the issue.
upvoted 3 times
...
desertlotus1211
1 year, 7 months ago
The question is not about validating a user identity; it's about mitigating the risk of open sessions. Answers B and C are correct. Answer C is A.
upvoted 3 times
...
anshad666
1 year, 7 months ago
I will go for A and C.
A - for Google web services like Gmail: https://support.google.com/a/answer/9368756?hl=en
C - for Google Cloud services and the SDK: https://support.google.com/a/answer/9368756?hl=en
Enforcing Security Key Authentication with 2SV adds strong authentication to user sessions, but it doesn't help if the attacker has already gained access. To mitigate cookie replay attacks, a web application should:
- Invalidate a session after it exceeds the predefined idle timeout, and after the user logs out.
- Set the lifespan for the session to be as short as possible.
- Encrypt the session data.
- Have a mechanism to detect when a cookie is seen by multiple clients.
upvoted 4 times
...
akg001
1 year, 8 months ago
A and E
upvoted 4 times
...
Mithung30
1 year, 8 months ago
A, C A. Configure Google session control to a shorter duration. This will make it more difficult for attackers to use stolen cookies to access user accounts, as the cookies will expire more quickly. C. Set a reauthentication policy for Google Cloud services to a shorter duration. This will also make it more difficult for attackers to use stolen cookies to access user accounts, as they will need to reauthenticate more frequently.
upvoted 3 times
cyberpunk21
1 year, 7 months ago
I don't think A is a good fit because we don't want users to lose their work due to a short session duration.
upvoted 1 times
...
...
ppandher
1 year, 8 months ago
Options A, C, and D are not directly related to mitigating cookie replay attacks or enhancing security against such threats. They address different aspects of session control, reauthentication policy, and identity provider configuration, but they do not directly tackle the issue of cookie replay attacks. Therefore, the best choices in this scenario are B and E.
upvoted 2 times
...

Question 228

exam questions

Exam Professional Cloud Security Engineer All Questions

View all questions & answers for the Professional Cloud Security Engineer exam

Exam Professional Cloud Security Engineer topic 1 question 228 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 228
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You manage a mission-critical workload for your organization, which is in a highly regulated industry. The workload uses Compute Engine VMs to analyze and process the sensitive data after it is uploaded to Cloud Storage from the endpoint computers. Your compliance team has detected that this workload does not meet the data protection requirements for sensitive data. You need to meet these requirements:

• Manage the data encryption key (DEK) outside the Google Cloud boundary.
• Maintain full control of encryption keys through a third-party provider.
• Encrypt the sensitive data before uploading it to Cloud Storage.
• Decrypt the sensitive data during processing in the Compute Engine VMs.
• Encrypt the sensitive data in memory while in use in the Compute Engine VMs.

What should you do? (Choose two.)

  • A. Configure Customer Managed Encryption Keys to encrypt the sensitive data before it is uploaded to Cloud Storage, and decrypt the sensitive data after it is downloaded into your VMs.
  • B. Configure Cloud External Key Manager to encrypt the sensitive data before it is uploaded to Cloud Storage, and decrypt the sensitive data after it is downloaded into your VMs.
  • C. Create Confidential VMs to access the sensitive data.
  • D. Migrate the Compute Engine VMs to Confidential VMs to access the sensitive data.
  • E. Create a VPC Service Controls service perimeter across your existing Compute Engine VMs and Cloud Storage buckets.
Suggested Answer: BC 🗳️

Comments

Chosen Answer:
Pime13
4 months ago
Selected Answer: BD
You must create a new VM instance to enable Confidential VM. Existing instances can't be converted to Confidential VM instances. https://cloud.google.com/confidential-computing/confidential-vm/docs/supported-configurations#limitations
upvoted 2 times
...
Zek
4 months ago
Selected Answer: BC
You must create a new VM instance to enable Confidential VM. Existing instances can't be converted to Confidential VM instances. https://cloud.google.com/confidential-computing/confidential-vm/docs/supported-configurations#limitations
upvoted 1 times
...
MoAk
4 months, 3 weeks ago
Selected Answer: BC
D is 100% wrong. you cannot migrate existing VMs to enable a confidential VM. https://cloud.google.com/confidential-computing/confidential-vm/docs/supported-configurations#limitations
upvoted 2 times
...
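Several commenters cite the documented limitation that Confidential Computing can only be enabled when an instance is first created. A minimal gcloud sketch of creating such a VM (project, zone, and names are placeholders; flag spellings can vary across gcloud releases, so verify against your installed version):

```shell
# Create a new Confidential VM -- existing instances cannot be converted.
# AMD SEV requires an N2D machine type and does not support live
# migration, hence --maintenance-policy=TERMINATE.
gcloud compute instances create example-cvm \
  --zone=europe-west1-b \
  --machine-type=n2d-standard-2 \
  --confidential-compute \
  --maintenance-policy=TERMINATE \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud
```

This is why answer D ("migrate" the existing VMs) fails: there is no in-place conversion path, only creating new Confidential VMs and decommissioning the old ones.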
Mr_MIXER007
7 months ago
Selected Answer: BD
B and D go together.
upvoted 1 times
...
Mr_MIXER007
7 months ago
B and D go together.
upvoted 1 times
...
EVEGCP
1 year, 4 months ago
BC : Confidential VM does not support live migration. https://cloud.google.com/confidential-computing/confidential-vm/docs/creating-cvm-instance#considerations
upvoted 2 times
...
MisterHairy
1 year, 4 months ago
Selected Answer: BC
Correction. When enabling Confidential Computing, it must be done when the VM instance is first created. Therefore, C (Create Confidential VMs to access the sensitive data) is the more accurate choice.
upvoted 2 times
...
MisterHairy
1 year, 4 months ago
Selected Answer: BD
The correct choices are: B. Configure Cloud External Key Manager to encrypt the sensitive data before it is uploaded to Cloud Storage, and decrypt the sensitive data after it is downloaded into your VMs. Cloud External Key Manager allows you to use encryption keys stored outside of Google’s infrastructure, providing full control over the key material. D. Migrate the Compute Engine VMs to Confidential VMs to access the sensitive data. Confidential VMs offer a breakthrough technology that encrypts data in-use, allowing you to work on sensitive data sets without exposing the data to the rest of the system. Option C involves creating new Confidential VMs, but it’s more efficient to migrate the existing Compute Engine VMs to Confidential VMs as stated in Option D.
upvoted 1 times
mjcts
1 year, 2 months ago
As per documentation: "You can only enable Confidential Computing on a VM when you first create an instance" Therefore it's C not D
upvoted 3 times
...
...
gkarthik1919
1 year, 6 months ago
BC . Agree.
upvoted 1 times
...
i_am_robot
1 year, 6 months ago
Selected Answer: BD
To meet the specified data protection requirements for sensitive data, including managing the data encryption key (DEK) outside the Google Cloud boundary and encrypting the sensitive data in memory while in use in the Compute Engine VMs, you should: B. Configure Cloud External Key Manager to encrypt the sensitive data before it is uploaded to Cloud Storage, and decrypt the sensitive data after it is downloaded into your VMs. D. Migrate the Compute Engine VMs to Confidential VMs to access the sensitive data.
upvoted 2 times
...
ArizonaClassics
1 year, 6 months ago
B. Configure Cloud External Key Manager (EKM) to encrypt the sensitive data before it is uploaded to Cloud Storage, and decrypt the sensitive data after it is downloaded into your VMs. D. Migrate the Compute Engine VMs to Confidential VMs to access the sensitive data. Confidential VMs allow you to encrypt data in use (in memory). These VMs ensure that data remains encrypted when it's being used and processed. This aligns with the requirement to encrypt sensitive data in memory while in use in the Compute Engine VMs.
upvoted 1 times
...
desertlotus1211
1 year, 7 months ago
Answer B&C: You cannot migrate a regular CE VM to Confidential. You must create a new Confidential VM, and then decommission the other one.
upvoted 2 times
...
ymkk
1 year, 7 months ago
Selected Answer: BC
B,C is the answer. Confidential VM does not support live migration. You can only enable Confidential Computing on a VM when you first create the instance. https://cloud.google.com/confidential-computing/confidential-vm/docs/creating-cvm-instance
upvoted 4 times
...
Andrei_Z
1 year, 7 months ago
Selected Answer: BC
I would go with BC as well
upvoted 2 times
...
cyberpunk21
1 year, 7 months ago
Selected Answer: BC
confidential VM doesn't support live migration.
upvoted 4 times
...
anshad666
1 year, 7 months ago
Selected Answer: BC
C because Confidential VM does not support live migration.
upvoted 1 times
...
akilaz
1 year, 7 months ago
Selected Answer: BC
That's right, no idea why BD is the correct answer.
upvoted 1 times
...

Question 229

Exam Professional Cloud Security Engineer topic 1 question 229 discussion

Question #: 229
Topic #: 1

Your organization wants to be General Data Protection Regulation (GDPR) compliant. You want to ensure that your DevOps teams can only create Google Cloud resources in the Europe regions.

What should you do?

  • A. Use Identity-Aware Proxy (IAP) with Access Context Manager to restrict the location of Google Cloud resources.
  • B. Use the org policy constraint 'Google Cloud Platform – Resource Location Restriction' on your Google Cloud organization node.
  • C. Use the org policy constraint 'Restrict Resource Service Usage' on your Google Cloud organization node.
  • D. Use Identity and Access Management (IAM) custom roles to ensure that your DevOps team can only create resources in the Europe regions.
Suggested Answer: B 🗳️

Comments

Chosen Answer:
mjcts
8 months ago
Selected Answer: B
B. Use the org policy constraint 'Google Cloud Platform – Resource Location Restriction' on your Google Cloud organization node.
upvoted 1 times
...
b6f53d8
8 months, 1 week ago
Selected Answer: B
good answer,
upvoted 1 times
...
ssk119
8 months, 4 weeks ago
I will go with A; since requirement for access to devops only is met through IAP and Access context manager ensures jurisdictional requirements around data.
upvoted 1 times
...
pradoUA
1 year ago
Selected Answer: B
B. Use the org policy constraint 'Google Cloud Platform – Resource Location Restriction' on your Google Cloud organization node.
upvoted 1 times
...
pfilourenco
1 year, 2 months ago
Selected Answer: B
B is the correct.
upvoted 2 times
...
Mithung30
1 year, 2 months ago
Correct answer is B https://cloud.google.com/resource-manager/docs/organization-policy/defining-locations
upvoted 1 times
...
ppandher
1 year, 2 months ago
B. Use the org policy constraint 'Google Cloud Platform – Resource Location Restriction' on your Google Cloud organization node: This policy constraint allows you to restrict the regions where Google Cloud resources can be created within your organization. By setting this constraint, you can ensure that resources are only deployed in the Europe regions, aligning with GDPR requirements for data processing and storage.
upvoted 3 times
Yohanes411
11 months, 4 weeks ago
Wouldn't that affect everyone under the organization? The location restriction is supposed to be applied only to the devops team and I imagine there are other teams/groups within the organization as well.
upvoted 2 times
ppandher
11 months, 2 weeks ago
Should be D ?
upvoted 1 times
...
ppandher
11 months, 2 weeks ago
I think While custom IAM roles can control permissions within projects, they do not inherently enforce geographic location restrictions on resource creation. Your thoughts ?
upvoted 1 times
...
...
...
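The accepted approach maps to a single org-policy change at the organization node. A sketch using the legacy `gcloud resource-manager org-policies` surface (the organization ID is a placeholder, and newer gcloud versions express the same thing via `gcloud org-policies set-policy` with a YAML file):

```shell
# Restrict resource creation to European locations org-wide.
# "in:eu-locations" is a predefined value group covering EU regions.
gcloud resource-manager org-policies allow gcp.resourceLocations \
  in:eu-locations \
  --organization=123456789012
```

On the thread's side question: an org policy can also be set on a folder or project instead of the organization node, so if only the DevOps team's projects must be constrained, the same constraint can be applied at their folder.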

Question 230

Exam Professional Cloud Security Engineer topic 1 question 230 discussion

Question #: 230
Topic #: 1

For data residency requirements, you want your secrets in Google Cloud's Secret Manager to only have payloads in europe-west1 and europe-west4. Your secrets must be highly available in both regions.

What should you do?

  • A. Create your secret with a user managed replication policy, and choose only compliant locations.
  • B. Create your secret with an automatic replication policy, and choose only compliant locations.
  • C. Create two secrets by using Terraform, one in europe-west1 and the other in europe-west4.
  • D. Create your secret with an automatic replication policy, and create an organizational policy to deny secret creation in non-compliant locations.
Suggested Answer: A 🗳️

Comments

Chosen Answer:
pfilourenco
Highly Voted 1 year, 8 months ago
Selected Answer: A
A is the correct. https://cloud.google.com/secret-manager/docs/choosing-replication#user-managed
upvoted 6 times
...
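The user-managed replication policy from answer A is a one-liner in gcloud (secret name is a placeholder):

```shell
# Secret payloads are stored and replicated only in the two listed
# EU regions -- residency plus high availability across both.
gcloud secrets create example-secret \
  --replication-policy="user-managed" \
  --locations=europe-west1,europe-west4
```

An automatic replication policy (answers B and D) offers no location choice at all, which is why those options fall apart.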
Pime13
Most Recent 4 months ago
Selected Answer: A
B. Automatic Replication Policy: This does not allow you to specify locations, so it wouldn't meet your data residency requirements. C. Two Secrets with Terraform: This approach is more complex and less efficient than using a user managed replication policy. D. Automatic Replication with Organizational Policy: This would not provide the control needed to ensure secrets are only in the specified regions.
upvoted 1 times
...
MoAk
4 months, 3 weeks ago
Selected Answer: A
A is correct as per https://cloud.google.com/secret-manager/docs/overview#:~:text=Ensure%20high%20availability%20and%20disaster,regardless%20of%20their%20geographic%20location.
upvoted 1 times
...
desertlotus1211
1 year, 2 months ago
Answer B: Here's the rationale for this choice: Secret Manager offers automatic replication for secrets, ensuring high availability by default. When you create a secret with an automatic replication policy, it automatically replicates the secret's data to multiple regions for redundancy. By choosing only compliant locations (europe-west1 and europe-west4) in your automatic replication policy, you enforce that the secret's data is stored only in those two regions, meeting your data residency requirements.
upvoted 1 times
...
iEM4D
1 year, 2 months ago
Selected Answer: A
https://cloud.google.com/secret-manager/docs/choosing-replication#user-managed
upvoted 1 times
...
ArizonaClassics
1 year, 6 months ago
A. Create your secret with a user managed replication policy, and choose only compliant locations. Here's why: User-managed replication lets you explicitly specify the secret's regions of replication, which aligns with the requirement to have payloads only in europe-west1 and europe-west4.
upvoted 1 times
...
Mithung30
1 year, 8 months ago
Correct answer is A. https://cloud.google.com/secret-manager/docs/choosing-replication?_ga=2.216110614.-1813351517.1690289784
upvoted 1 times
...
alkaloid
1 year, 8 months ago
ChatGPT-3.5 proposes B instead. I'll go with A https://www.youtube.com/watch?v=9KWGRSVZtFU&t=335s
upvoted 2 times
...
kapara
1 year, 8 months ago
from ChatGPT-4: The correct answer is A. Create your secret with a user-managed replication policy, and choose only compliant locations. In Google Cloud's Secret Manager, secrets with a user-managed replication policy are replicated only in the user-specified locations. This can be used to ensure data residency requirements are met, as the secret data (payloads) will not be stored or replicated outside of the regions selected in the policy. The automatic replication policy option (B and D) would not work because it replicates data across all regions in Google Cloud, which may violate the data residency requirements. Creating two secrets using Terraform (C) in different regions could work from a data residency standpoint, but it could lead to management issues as you would have two separate secrets to manage instead of one.
upvoted 2 times
...

Question 231

Exam Professional Cloud Security Engineer topic 1 question 231 discussion

Question #: 231
Topic #: 1

You are migrating an application into the cloud. The application will need to read data from a Cloud Storage bucket. Due to local regulatory requirements, you need to hold the key material used for encryption fully under your control and you require a valid rationale for accessing the key material.

What should you do?

  • A. Encrypt the data in the Cloud Storage bucket by using Customer Managed Encryption Keys. Configure an IAM deny policy for unauthorized groups.
  • B. Generate a key in your on-premises environment to encrypt the data before you upload the data to the Cloud Storage bucket. Upload the key to the Cloud Key Management Service (KMS). Activate Key Access Justifications (KAJ) and have the external key system reject unauthorized accesses.
  • C. Encrypt the data in the Cloud Storage bucket by using Customer Managed Encryption Keys backed by a Cloud Hardware Security Module (HSM). Enable data access logs.
  • D. Generate a key in your on-premises environment and store it in a Hardware Security Module (HSM) that is managed on-premises. Use this key as an external key in the Cloud Key Management Service (KMS). Activate Key Access Justifications (KAJ) and set the external key system to reject unauthorized accesses.
Suggested Answer: D 🗳️

Comments

Chosen Answer:
Pime13
4 months ago
Selected Answer: D
External key means Cloud External Key Manager. Key Access Justifications is part of Cloud External Key Manager.
upvoted 1 times
...
Sundar_Pichai
7 months, 2 weeks ago
Selected Answer: D
"Provide justification for key usage" is your hint in this question. That leaves B or D. You can't upload custom keys to KMS directly as in option B, so D.
upvoted 1 times
...
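A sketch of the externally managed key setup in Cloud KMS that answer D describes (key ring, key names, and the EKM URI are placeholders; exact flags may differ by gcloud version):

```shell
# Create a KMS key whose material lives in an external key manager.
gcloud kms keyrings create ekm-ring --location=europe-west3
gcloud kms keys create ekm-key \
  --keyring=ekm-ring --location=europe-west3 \
  --purpose=encryption \
  --protection-level=external \
  --skip-initial-version-creation
# Each key version points at the externally hosted key material.
gcloud kms keys versions create \
  --key=ekm-key --keyring=ekm-ring --location=europe-west3 \
  --external-key-uri="https://ekm.example.com/v0/keys/example-key"
```

Key Access Justifications itself is not a gcloud flag here: the justification accompanies each key request, and the external key manager is configured to accept or reject requests based on it, which is what keeps the rationale requirement under your control.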
MisterHairy
1 year, 4 months ago
Selected Answer: D
The correct answer is D. Generate a key in your on-premises environment and store it in a Hardware Security Module (HSM) that is managed on-premises. Use this key as an external key in the Cloud Key Management Service (KMS). Activate Key Access Justifications (KAJ) and set the external key system to reject unauthorized accesses. This approach allows you to maintain full control over the key material used for encryption, as the key is generated and stored in an on-premises HSM. By using this key as an external key in Cloud KMS, you can leverage Google Cloud’s key management capabilities while still maintaining control over the key material. Activating Key Access Justifications provides a valid rationale for accessing the key material, as it allows you to monitor and justify each attempt to use the key.
upvoted 2 times
...
ArizonaClassics
1 year, 6 months ago
D. Generate a key in your on-premises environment and store it in a Hardware Security Module (HSM) that is managed on-premises. Use this key as an external key in the Cloud Key Management Service (KMS). Activate Key Access Justifications (KAJ) and set the external key system to reject unauthorized accesses. This is the correct approach for the following reasons: By generating a key in your on-premises environment and storing it in an HSM that you manage, you're ensuring that the key material is fully under your control. Using the key as an external key in Cloud KMS allows you to use the key with Google Cloud services without having the key stored on Google Cloud. Activating Key Access Justifications (KAJ) provides a reason every time the key is accessed, and you can configure the external key system to reject unauthorized access attempts.
upvoted 1 times
...
anshad666
1 year, 7 months ago
Selected Answer: D
D- key material used for encryption fully under your control and you require a valid rationale for accessing the key material
upvoted 1 times
...
ymkk
1 year, 7 months ago
Selected Answer: D
Option D meets the key control requirements and ensures regulatory compliance.
upvoted 1 times
...
akg001
1 year, 8 months ago
Selected Answer: D
D looks correct.
upvoted 1 times
...
gcp4test
1 year, 8 months ago
Selected Answer: D
External key means Cloud External Key Manager. Key Access Justifications is part of Cloud External Key Manager.
upvoted 4 times
...

Question 232

Exam Professional Cloud Security Engineer topic 1 question 232 discussion

Question #: 232
Topic #: 1

Your organization uses the top-tier folder to separate application environments (prod and dev). The developers need to see all application development audit logs, but they are not permitted to review production logs. Your security team can review all logs in production and development environments. You must grant Identity and Access Management (IAM) roles at the right resource level for the developers and security team while you ensure least privilege.

What should you do?

  • A. 1. Grant logging.viewer role to the security team at the organization resource level.
    2. Grant logging.viewer role to the developer team at the folder resource level that contains all the dev projects.
  • B. 1. Grant logging.viewer role to the security team at the organization resource level.
    2. Grant logging.admin role to the developer team at the organization resource level.
  • C. 1. Grant logging.admin role to the security team at the organization resource level.
    2. Grant logging.viewer role to the developer team at the folder resource level that contains all the dev projects.
  • D. 1. Grant logging.admin role to the security team at the organization resource level.
    2. Grant logging.admin role to the developer team at the organization resource level.
Suggested Answer: A 🗳️

Comments

Chosen Answer:
7f97f9f
1 month, 2 weeks ago
Selected Answer: A
The security team only needs to view logs, not manage log resources. logging.admin grants unnecessary permissions.
upvoted 1 times
...
Kmkz83510
3 months, 3 weeks ago
Selected Answer: C
Security team needs access to ALL logs. The only way they'll get that is with logging.admin. logging.viewer would not provide data access logs.
upvoted 1 times
...
Bettoxicity
6 months, 1 week ago
Selected Answer: A
A is correct!
upvoted 1 times
...
ale183
10 months, 3 weeks ago
A is correct: least privilege access.
upvoted 2 times
...
MisterHairy
10 months, 3 weeks ago
Selected Answer: A
Grant logging.viewer role to the security team at the organization resource level. This allows the security team to view all logs in both production and development environments. Grant logging.viewer role to the developer team at the folder resource level that contains all the dev projects. This allows the developers to view all application development audit logs, but not the production logs, ensuring least privilege.
upvoted 1 times
...
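The two grants from answer A can be sketched with gcloud (organization ID, folder ID, and group addresses are placeholders):

```shell
# Security team: read logs across the entire organization.
gcloud organizations add-iam-policy-binding 123456789012 \
  --member="group:security-team@example.com" \
  --role="roles/logging.viewer"

# Developers: read logs only under the folder holding the dev projects.
gcloud resource-manager folders add-iam-policy-binding 987654321098 \
  --member="group:dev-team@example.com" \
  --role="roles/logging.viewer"
```

Because IAM bindings inherit downward, the folder-level grant covers every dev project without touching production, and `logging.viewer` avoids the broad log-management permissions bundled into `logging.admin`.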

Question 233

Exam Professional Cloud Security Engineer topic 1 question 233 discussion

Question #: 233
Topic #: 1

You manage a fleet of virtual machines (VMs) in your organization. You have encountered issues with lack of patching in many VMs. You need to automate regular patching in your VMs and view the patch management data across multiple projects.

What should you do? (Choose two.)

  • A. View patch management data in VM Manager by using OS patch management.
  • B. View patch management data in Artifact Registry.
  • C. View patch management data in a Security Command Center dashboard.
  • D. Deploy patches with Security Command Center by using Rapid Vulnerability Detection.
  • E. Deploy patches with VM Manager by using OS patch management.
Suggested Answer: AE 🗳️

Comments

Chosen Answer:
Pime13
4 months ago
Selected Answer: AE
A, E - https://cloud.google.com/compute/vm-manager/docs/patch https://cloud.google.com/compute/vm-manager/docs/patch/view-patch-summary#patch-summary
upvoted 2 times
...
BPzen
4 months, 2 weeks ago
Selected Answer: AE
Why Option A is Correct: VM Manager OS Patch Management: VM Manager provides a centralized view of patch status and compliance for all VMs across multiple projects. Patch management data includes details about which updates are available, installed, or missing for your virtual machines. Why Option E is Correct: Automated Patching with VM Manager: You can configure patch schedules to automate the application of patches to VMs across your organization. VM Manager ensures that patches are applied regularly, reducing the risk of vulnerabilities from outdated software.
upvoted 1 times
...
nah99
4 months, 2 weeks ago
Selected Answer: AE
This shows multiple projects https://cloud.google.com/compute/vm-manager/docs/patch/view-patch-summary#patch-summary
upvoted 1 times
...
MoAk
4 months, 3 weeks ago
Selected Answer: AE
A - https://cloud.google.com/compute/vm-manager/docs/patch E - https://cloud.google.com/compute/vm-manager/docs/patch
upvoted 2 times
...
SQLbox
6 months, 3 weeks ago
A. View patch management data in VM Manager by using OS patch management. Why? VM Manager's OS patch management feature provides a centralized view of patch compliance across your VMs, including multiple projects. It allows you to schedule and monitor patches, helping you ensure that your VMs are regularly patched and secure. E. Deploy patches with VM Manager by using OS patch management. Why? VM Manager's OS patch management allows you to automate the deployment of patches to your VMs. You can set patching schedules, define maintenance windows, and apply patches across multiple VMs in a consistent and automated manner.
upvoted 1 times
...
Mr_MIXER007
7 months ago
Selected Answer: AE
A and E go with this
upvoted 1 times
...
irmingard_examtopics
12 months ago
Selected Answer: CE
https://cloud.google.com/security-command-center/docs/concepts-security-sources#vm_manager Findings simplify the process of using VM Manager's Patch Compliance feature, which is in preview. The feature lets you conduct patch management at the organization level across all of your projects. Currently, VM Manager supports patch management at the single project level.
upvoted 2 times
...
MFay
12 months ago
A and E. The Patch feature has two main components: Patch compliance reporting, which provides insights on the patch status of your VM instances across Windows and Linux distributions. Along with the insights, you can also view recommendations for your VM instances. Patch deployment, which automates the operating system and software patch update process. A patch deployment schedules patch jobs. A patch job runs across VM instances and applies patches.
upvoted 1 times
...
glb2
1 year ago
Selected Answer: CE
C and E, because we need to view the patch management data across multiple projects needs
upvoted 1 times
nah99
4 months, 2 weeks ago
You can, see this. Therefore, A & E is better. https://cloud.google.com/compute/vm-manager/docs/patch/view-patch-summary#patch-summary
upvoted 1 times
...
...
[Removed]
1 year, 3 months ago
Selected Answer: CE
CE. VM Manager is not cross-project.
upvoted 4 times
...
gical
1 year, 3 months ago
A is wrong because according https://niveussolutions.com/mastering-os-patching-in-vm-manager-cloud-native-solution/ "VM Manager’s patching reports are specific to individual projects. As a result, there is no direct mechanism to consolidate or aggregate the patch compliance status of all projects within an organization."
upvoted 3 times
...
ale183
1 year, 4 months ago
A and D https://cloud.google.com/compute/docs/os-patch-management
upvoted 1 times
...
MisterHairy
1 year, 4 months ago
Selected Answer: AE
A. View patch management data in VM Manager by using OS patch management. VM Manager’s OS patch management feature allows you to view patch compliance and deployment data across multiple projects. E. Deploy patches with VM Manager by using OS patch management. VM Manager’s OS patch management feature also allows you to automate the deployment of patches to your VMs.
upvoted 2 times
...
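The VM Manager side of answer E can be sketched with the `os-config` command group (names and the deployment file are placeholders; check the flags against your gcloud version):

```shell
# One-off patch job across all VMs in the current project.
gcloud compute os-config patch-jobs execute \
  --instance-filter-all \
  --display-name="ad-hoc-patching"

# Recurring patching is defined as a patch deployment, with the
# schedule and instance targets described in a local JSON/YAML file.
gcloud compute os-config patch-deployments create weekly-patching \
  --file=patch-deployment.json
```

Patch compliance reporting (answer A) then surfaces the resulting patch status per VM, which is the viewing half of the requirement.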

Question 234

Exam Professional Cloud Security Engineer topic 1 question 234 discussion

Question #: 234
Topic #: 1

Your organization uses BigQuery to process highly sensitive, structured datasets. Following the “need to know” principle, you need to create the Identity and Access Management (IAM) design to meet the needs of these users:
• Business user: must access curated reports.
• Data engineer: must administrate the data lifecycle in the platform.
• Security operator: must review user activity on the data platform.

What should you do?

  • A. Configure data access log for BigQuery services, and grant Project Viewer role to security operator.
  • B. Set row-based access control based on the “region” column, and filter the record from the United States for data engineers.
  • C. Create curated tables in a separate dataset and assign the role roles/bigquery.dataViewer.
  • D. Generate a CSV data file based on the business user's needs, and send the data to their email addresses.
Suggested Answer: C 🗳️

Comments

Chosen Answer:
MisterHairy
Highly Voted 1 year, 4 months ago
Selected Answer: C
Correction. The most correct answer would be C. Create curated tables in a separate dataset and assign the role roles/bigquery.dataViewer. This option directly addresses the needs of the business user who must access curated reports. By creating curated tables in a separate dataset, you can control access to specific data. Assigning the roles/bigquery.dataViewer role allows the business user to view the data in BigQuery. While option A is also a good practice for a security operator, it doesn’t directly address the specific needs of the users mentioned in the question as effectively as option C does. Therefore, if you can only choose one answer, option C would be the most correct.
upvoted 7 times
...
JohnDohertyDoe
Most Recent 3 months, 1 week ago
Selected Answer: C
The answers do not fit all the requirements. But the one that addresses is C. A is not right, as even if Data Access logs are enabled, they cannot be viewed by the Security Operator role with `viewer`, they would need `logging.privateLogViewer`.
upvoted 1 times
...
Mr_MIXER007
7 months ago
Selected Answer: C
C. Create curated tables in a separate dataset and assign the role roles/bigquery.dataViewer.
upvoted 1 times
...
Nkay17
10 months, 1 week ago
Answer C: Data Access audit logs—except for BigQuery Data Access audit logs—are disabled by default because audit logs can be quite large.
upvoted 1 times
...
Bettoxicity
1 year ago
Selected Answer: A
A is the correct!
upvoted 1 times
...
dija123
1 year, 1 month ago
Selected Answer: A
Option A (data access logs and Project Viewer for security) offers a simpler path to achieve "need to know" for business users and data engineers while providing the security operator with visibility into user activity.
upvoted 1 times
...
dija123
1 year, 1 month ago
Selected Answer: A
Sorry I wanted to vote for A
upvoted 1 times
...
dija123
1 year, 1 month ago
Selected Answer: C
Both Option A and Option C can be effective for different reasons. Option A offers simplicity and aligns with "need to know" for most users, while Option C provides more granular control over data access but requires additional configuration.
upvoted 1 times
...
MisterHairy
1 year, 4 months ago
Selected Answer: A
A. Configure data access log for BigQuery services, and grant Project Viewer role to security operator. This allows the security operator to review user activity on the data platform. C. Create curated tables in a separate dataset and assign the role roles/bigquery.dataViewer. This allows the business user to access curated reports. The data engineer can administrate the data lifecycle in the platform.
upvoted 1 times
...
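One concrete mechanism for the answer-C grant, using the bq CLI (project, dataset, table, and group names are placeholders; bq supports IAM bindings at table/view granularity, while dataset-wide access is managed through the dataset's access settings or infrastructure-as-code):

```shell
# Grant the business-user group read access on a curated table only,
# keeping the raw datasets invisible to them ("need to know").
bq add-iam-policy-binding \
  --member="group:business-users@example.com" \
  --role="roles/bigquery.dataViewer" \
  example-project:curated_reports.sales_summary
```

The data engineer and security operator then get their own narrowly scoped roles (e.g. data lifecycle administration and log review) on the appropriate resources, rather than broad project-level access.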

Question 235

Exam Professional Cloud Security Engineer topic 1 question 235 discussion

Question #: 235
Topic #: 1

You are setting up a new Cloud Storage bucket in your environment that is encrypted with a customer managed encryption key (CMEK). The CMEK is stored in Cloud Key Management Service (KMS), in project “prj-a”, and the Cloud Storage bucket will use project “prj-b”. The key is backed by a Cloud Hardware Security Module (HSM) and resides in the region europe-west3. Your storage bucket will be located in the region europe-west1. When you create the bucket, you cannot access the key, and you need to troubleshoot why.

What has caused the access issue?

  • A. A firewall rule prevents the key from being accessible.
  • B. Cloud HSM does not support Cloud Storage.
  • C. The CMEK is in a different project than the Cloud Storage bucket.
  • D. The CMEK is in a different region than the Cloud Storage bucket.
Suggested Answer: D 🗳️

Comments

Chosen Answer:
MoAk
4 months, 2 weeks ago
Selected Answer: D
https://cloud.google.com/kms/docs/cmek#when-use-cmek
upvoted 1 times
...
Potatoe2023
11 months, 2 weeks ago
Selected Answer: D
D https://cloud.google.com/kms/docs/cmek#cmek_integrations
upvoted 2 times
...
irmingard_examtopics
12 months ago
Selected Answer: D
You must create the Cloud KMS key ring in the same location as the data you intend to encrypt. For example, if your bucket is located in US-EAST1, any key ring used for encrypting objects in that bucket must also be created in US-EAST1. https://cloud.google.com/storage/docs/encryption/customer-managed-keys#restrictions
upvoted 4 times
...
Bettoxicity
1 year ago
Selected Answer: C
CMEK Project Mismatch: By default, CMEKs can only be accessed by services within the same GCP project where the key resides (prj-a in this case). Your Cloud Storage bucket is in a different project (prj-b). Why not D?: CMEK Region Disparity: CMEKs can be accessed from any region within GCP, so the difference between europe-west3 (CMEK location) and europe-west1 (bucket location) shouldn't be the primary cause.
upvoted 1 times
...
dija123
1 year, 1 month ago
Selected Answer: C
By default, Google Cloud projects operate in isolation. Resources in one project cannot automatically access resources in another project, even within the same region. This security principle prevents unauthorized access to sensitive data or actions.
upvoted 1 times
...
i_am_robot
1 year, 3 months ago
Selected Answer: D
The access issue is caused by the fact that the CMEK is in a different region than the Cloud Storage bucket. According to the Google Cloud documentation, the location of the Cloud KMS key must match the storage location of the resource it is intended to encrypt. Since the CMEK resides in the region europe-west3 and the storage bucket is located in the region europe-west1, this mismatch is the reason why the key cannot be accessed when creating the bucket. Therefore, the correct answer is: D. The CMEK is in a different region than the Cloud Storage bucket
upvoted 4 times
...
NaikMN
1 year, 4 months ago
D https://cloud.google.com/sql/docs/mysql/cmek
upvoted 1 times
dija123
1 year ago
this link is about sql not Cloud storage, Cloud Storage with CMEK is more flexible regarding regions.
upvoted 1 times
...
...
MisterHairy
1 year, 4 months ago
Selected Answer: D
The correct answer is D. The CMEK is in a different region than the Cloud Storage bucket. When you use a customer-managed encryption key (CMEK) to secure a Cloud Storage bucket, the key and the bucket must be located in the same region. In this case, the key is in europe-west3 and the bucket is in europe-west1, which is why you’re unable to access the key.
upvoted 3 times
...

Question 236

Exam Professional Cloud Security Engineer topic 1 question 236 discussion

Question #: 236
Topic #: 1

You are deploying regulated workloads on Google Cloud. The regulation has data residency and data access requirements. It also requires that support is provided from the same geographical location as where the data resides.

What should you do?

  • A. Enable Access Transparency Logging.
  • B. Deploy Assured Workloads.
  • C. Deploy resources only to regions permitted by data residency requirements.
  • D. Use Data Access logging and Access Transparency logging to confirm that no users are accessing data from another region.
Suggested Answer: B 🗳️

Comments

Pime13
4 months ago
Selected Answer: B
https://cloud.google.com/assured-workloads/docs/overview
upvoted 1 times
...
Crotofroto
1 year, 3 months ago
Selected Answer: B
Assured Workloads is used to deploy regulated workloads. https://cloud.google.com/assured-workloads/docs/overview
upvoted 2 times
...
i_am_robot
1 year, 3 months ago
Selected Answer: B
We should deploy Assured Workloads. Assured Workloads helps businesses in regulated sectors meet compliance requirements by providing a secure and compliant environment with features like data residency controls for specific compliance types, data and personnel access controls, and real-time monitoring for compliance violations. It ensures that customers' workloads are supported only by Google Cloud personnel who meet specific geographical and personnel conditions. You can select the regulatory framework you need to follow, and Assured Workloads will automatically configure and deploy the controls needed to help meet your requirements.
upvoted 3 times
...
NaikMN
1 year, 4 months ago
B https://cloud.google.com/security/products/assured-workloads?hl=en
upvoted 2 times
...
MisterHairy
1 year, 4 months ago
Selected Answer: B
The correct answer is B. Deploy Assured Workloads. Assured Workloads for Google Cloud allows you to deploy regulated workloads with data residency, access, and support requirements. It helps you configure your environment in a manner that aligns with specific compliance frameworks and standards.
upvoted 2 times
...

Question 237

Exam Professional Cloud Security Engineer topic 1 question 237 discussion

Question #: 237
Topic #: 1

Your organization wants full control of the keys used to encrypt data at rest in their Google Cloud environments. Keys must be generated and stored outside of Google and integrate with many Google Services including BigQuery.

What should you do?

  • A. Use customer-supplied encryption keys (CSEK) with keys generated on trusted external systems. Provide the raw CSEK as part of the API call.
  • B. Create a KMS key that is stored on a Google managed FIPS 140-2 level 3 Hardware Security Module (HSM). Manage the Identity and Access Management (IAM) permissions settings, and set up the key rotation period.
  • C. Use Cloud External Key Management (EKM) that integrates with an external Hardware Security Module (HSM) system from supported vendors.
  • D. Create a Cloud Key Management Service (KMS) key with imported key material. Wrap the key for protection during import. Import the key generated on a trusted system in Cloud KMS.
Suggested Answer: C 🗳️

Comments

Pime13
4 months ago
Selected Answer: C
https://cloud.google.com/assured-workloads/docs/overview
upvoted 1 times
...
Mr_MIXER007
7 months ago
Selected Answer: C
Use Cloud External Key Management (EKM) that integrates with an external Hardware Security Module (HSM) system from supported vendors
upvoted 1 times
...
AgoodDay
7 months, 3 weeks ago
Selected Answer: C
agree with c
upvoted 1 times
...
Bettoxicity
1 year ago
Selected Answer: C
C. -Full Key Control: Cloud EKM allows you to leverage an external HSM, providing complete control over key generation and storage outside of Google's infrastructure. This satisfies your organization's key control requirement. -Google Service Integration: Cloud EKM integrates seamlessly with numerous Google Services, including BigQuery. You can use these external keys for encrypting data at rest within those services.
upvoted 1 times
...
dija123
1 year, 1 month ago
Selected Answer: C
Agree with C
upvoted 1 times
...
NaikMN
1 year, 4 months ago
C https://cloud.google.com/kms/docs/ekm
upvoted 1 times
...
MisterHairy
1 year, 4 months ago
Selected Answer: C
The correct answer is C. Use Cloud External Key Management (EKM) that integrates with an external Hardware Security Module (HSM) system from supported vendors. Cloud EKM allows you to use encryption keys that are stored and managed in a third-party key management system deployed outside of Google’s infrastructure. This gives your organization full control over the keys used to encrypt data at rest in Google Cloud environments, including BigQuery.
upvoted 1 times
...

Question 238

Exam Professional Cloud Security Engineer topic 1 question 238 discussion

Question #: 238
Topic #: 1

Your company is concerned about unauthorized parties gaining access to the Google Cloud environment by using a fake login page. You must implement a solution to protect against person-in-the-middle attacks.

Which security measure should you use?

  • A. Security key
  • B. Google prompt
  • C. Text message or phone call code
  • D. Google Authenticator application
Suggested Answer: A 🗳️
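The phishing resistance the commenters describe comes from origin binding in the FIDO/WebAuthn flow: the security key's assertion covers the origin the browser actually saw, so an assertion produced on a look-alike domain fails verification at the real site, while a relayed OTP code carries no such binding. A minimal sketch with hypothetical function names and no real cryptography:

```python
# Illustration of FIDO origin binding (no real crypto). A security key
# signs the challenge together with the origin the browser reported; the
# server rejects assertions whose origin is not its own. Function names
# and the origin value are assumptions for this sketch.

def security_key_assert(challenge: str, origin: str) -> tuple:
    # A real key returns a cryptographic signature over this pair.
    return (challenge, origin)

def server_verifies(assertion, expected_origin="https://accounts.google.com") -> bool:
    challenge, origin = assertion
    return origin == expected_origin

# A person-in-the-middle relays the challenge via a fake login page,
# but the key binds the origin it saw, so verification fails:
phished = security_key_assert("nonce123", "https://fake-login.example")
legit = security_key_assert("nonce123", "https://accounts.google.com")
print(server_verifies(phished))  # False: the relayed assertion is rejected
print(server_verifies(legit))    # True
```

An OTP from answers B, C, or D is just a short code with no origin inside it, which is why it survives relaying and a security key assertion does not.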

Comments

Pime13
4 months ago
Selected Answer: A
Key Differences: Security Key (A): Uses cryptographic proof of identity and the FIDO standard, making it highly resistant to phishing and person-in-the-middle attacks. It requires physical possession of the key, adding an extra layer of security. Google Authenticator (D): Generates time-based one-time passwords (TOTP) that are more secure than SMS codes but can still be vulnerable to phishing if the attacker manages to intercept the code.
upvoted 1 times
...
3d9563b
8 months, 3 weeks ago
Selected Answer: A
To mitigate the risk of man-in-the-middle attacks and enhance the security of your Google Cloud environment, security keys provide the highest level of protection by using strong cryptographic methods and requiring physical access for authentication.
upvoted 1 times
...
Bettoxicity
1 year ago
Selected Answer: D
- MFA: Google Authenticator is a MFA tool that generates unique, time-based one-time passcodes (OTP) on your mobile device. Even if an attacker steals your login credentials, they wouldn't have the valid OTP generated by the Google Authenticator app, significantly reducing the risk of unauthorized access. - Out-of-band Authentication: MFA with Google Authenticator provides an extra layer of security because the verification code is generated on a separate device (your phone) rather than being sent via SMS or a phone call, which can be intercepted in person-in-the-middle attacks. Why not A?: Security keys offer strong two-factor authentication, but they require physical possession of the key, which might not be suitable for all situations.
upvoted 1 times
...
dija123
1 year, 1 month ago
Selected Answer: A
A. Security key
upvoted 1 times
...
Crotofroto
1 year, 3 months ago
Selected Answer: A
A is the only one that physically validates the person who is trying to access.
upvoted 1 times
...
MisterHairy
1 year, 4 months ago
Selected Answer: A
The correct answer is A. Security key. A security key is a physical device that you can use for two-step verification, providing an additional layer of security for your Google Account. Security keys can defend against phishing and man-in-the-middle attacks, making your login process more secure.
upvoted 2 times
...

Question 239

Exam Professional Cloud Security Engineer topic 1 question 239 discussion

Question #: 239
Topic #: 1

You control network traffic for a folder in your Google Cloud environment. Your folder includes multiple projects and Virtual Private Cloud (VPC) networks. You want to enforce on the folder level that egress connections are limited only to IP range 10.58.5.0/24 and only from the VPC network “dev-vpc”. You want to minimize implementation and maintenance effort.

What should you do?

  • A. 1. Leave the network configuration of the VMs in scope unchanged.
    2. Create a new project including a new VPC network “new-vpc”.
    3. Deploy a network appliance in “new-vpc” to filter access requests and only allow egress connections from “dev-vpc” to 10.58.5.0/24.
  • B. 1. Leave the network configuration of the VMs in scope unchanged.
    2. Enable Cloud NAT for “dev-vpc” and restrict the target range in Cloud NAT to 10.58.5.0/24.
  • C. 1. Attach external IP addresses to the VMs in scope.
    2. Define and apply a hierarchical firewall policy on folder level to deny all egress connections and to allow egress to IP range 10.58.5.0/24 from network dev-vpc.
  • D. 1. Attach external IP addresses to the VMs in scope.
    2. Configure a VPC Firewall rule in “dev-vpc” that allows egress connectivity to IP range 10.58.5.0/24 for all source addresses in this network.
Suggested Answer: C 🗳️
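The evaluation order the commenters describe for answer C (a folder-level deny-all plus a higher-priority allow rule) can be sketched as a first-match walk over priority-ordered rules. The rule values mirror the answer; the data structure and function are illustrative, not the Compute Engine firewall API:

```python
# Sketch of hierarchical firewall evaluation: rules are checked in
# priority order (lower number = higher priority) and the first match
# decides. Values mirror answer C: allow egress to 10.58.5.0/24 from
# "dev-vpc", deny all other egress. Illustration only.
import ipaddress

RULES = [  # (priority, source network, destination range, action)
    (1000, "dev-vpc", "10.58.5.0/24", "allow"),
    (65534, "*", "0.0.0.0/0", "deny"),
]

def evaluate_egress(network: str, dest_ip: str) -> str:
    for _prio, rule_net, dest_range, action in sorted(RULES):
        if rule_net in ("*", network) and \
                ipaddress.ip_address(dest_ip) in ipaddress.ip_network(dest_range):
            return action
    return "deny"  # implicit default

print(evaluate_egress("dev-vpc", "10.58.5.10"))   # allow
print(evaluate_egress("dev-vpc", "8.8.8.8"))      # deny
print(evaluate_egress("test-vpc", "10.58.5.10"))  # deny
```

Because the policy lives at the folder level, every project and VPC under the folder inherits it automatically, which is what minimizes the maintenance effort.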

Comments

BPzen
4 months, 1 week ago
Selected Answer: C
Hierarchical Firewall Policy: These policies are defined at the organization or folder level and are inherited by all projects under the folder. You can use this to enforce a rule that allows egress traffic only to the specific IP range (10.58.5.0/24) from the dev-vpc network while blocking all other egress traffic. This minimizes ongoing maintenance because the policy applies automatically to all resources in the folder. External IP Addresses: By attaching external IP addresses to the VMs, you ensure they can communicate outside the VPC, subject to the egress policies defined at the folder level.
upvoted 2 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: C
Hmm, this is a tricky one. Between B and C I am leaning more towards C, but only because of the wording in the Q itself, specifically "enforce on the folder level". For me all options are pants, but I feel the Q is intending to test knowledge of hierarchical firewall policies. Further, Cloud NAT itself would not be the product selected to "enforce" the controls intended by the use case in this Q.
upvoted 1 times
...
Mr_MIXER007
7 months ago
Selected Answer: C
Cloud NAT is primarily for providing internet access to instances in private subnets. It doesn't offer the granular control needed to restrict egress traffic based on source VPC networks
upvoted 1 times
...
3d9563b
8 months, 3 weeks ago
Selected Answer: C
Applying a hierarchical firewall policy at the folder level ensures centralized control of egress traffic across all networks and projects within the folder, minimizing implementation and maintenance efforts while enforcing the required network traffic constraints.
upvoted 1 times
...
pico
10 months, 4 weeks ago
Selected Answer: B
I don't agree 100% with any of them, though. B and C are the least bad, but neither is good. C doesn't comply with "on the folder level", and B doesn't comply with "minimize implementation and maintenance effort" because of the step that adds external IP addresses to the VMs.
upvoted 1 times
...
Bettoxicity
1 year ago
Selected Answer: C
-Folder-Level Policy: A hierarchical firewall policy applied at the folder level ensures consistent enforcement across all VPC networks within that folder. This simplifies management compared to individual project or VPC configurations. -Deny All Egress with Allow Rule: Setting a "deny all egress" rule as the default policy at the folder level strengthens security by explicitly blocking outbound traffic. A separate rule specifically allows egress to the desired IP range (10.58.5.0/24) from the "dev-vpc" network, meeting your requirements. -No VM Configuration Changes: This approach avoids modifying individual VM network configurations, reducing complexity and potential errors.
upvoted 1 times
...
dija123
1 year, 1 month ago
Selected Answer: B
Allowing egress to the entire 10.58.5.0/24 network does not make any sense. Enabling Cloud NAT for "dev-vpc" with the target range restricted to 10.58.5.0/24 provides a straightforward and efficient way to enforce egress connections at the folder level, meeting your criteria of minimal implementation and maintenance effort.
upvoted 2 times
...
adb4007
1 year, 2 months ago
Selected Answer: C
In my opinion the least bad option is C. A is wrong because using another VPC in another network cannot help filter egress access. B is wrong for me because NAT doesn't let us limit access, even though NAT can be set up between VPCs. For D, all egress connections are allowed by default, so adding an allow rule changes nothing. With C, you apply a rule to the whole folder that denies egress by default and allows the expected source network. I don't understand why adding a public IP address is needed; it doesn't help in my view, but it isn't a blocker either.
upvoted 1 times
...
b6f53d8
1 year, 2 months ago
Selected Answer: B
Why not B ?
upvoted 3 times
b6f53d8
1 year, 2 months ago
But mentioned IP range is internal, so why we need External IP ? In my opinion all answers are bad
upvoted 3 times
winston9
1 year, 2 months ago
NAT can be used to route internal traffic to other VPCs also. Cloud NAT lets certain resources in Google Cloud create outbound connections to the internet or to other Virtual Private Cloud (VPC) networks. https://cloud.google.com/nat/docs/overview
upvoted 2 times
...
...
...
NaikMN
1 year, 4 months ago
Selected Answer: C
https://cloud.google.com/firewall/docs/firewall-policies-examples
upvoted 1 times
...
MisterHairy
1 year, 4 months ago
Selected Answer: C
The correct answer is C. 1. Attach external IP addresses to the VMs in scope. 2. Define and apply a hierarchical firewall policy on folder level to deny all egress connections and to allow egress to IP range 10.58.5.0/24 from network dev-vpc. This approach allows you to control network traffic at the folder level. By attaching external IP addresses to the VMs in scope, you can ensure that the VMs have a unique, routable IP address for outbound connections. Then, by defining and applying a hierarchical firewall policy at the folder level, you can enforce that egress connections are limited to the specified IP range and only from the specified VPC network.
upvoted 1 times
...

Question 242

Exam Professional Cloud Security Engineer topic 1 question 242 discussion

Question #: 242
Topic #: 1

Your Google Cloud environment has one organization node, one folder named “Apps”, and several projects within that folder. The organizational node enforces the constraints/iam.allowedPolicyMemberDomains organization policy, which allows members from the terramearth.com organization. The “Apps” folder enforces the constraints/iam.allowedPolicyMemberDomains organization policy, which allows members from the flowlogistic.com organization. It also has the inheritFromParent: false property.

You attempt to grant access to a project in the “Apps” folder to the user [email protected].

What is the result of your action and why?

  • A. The action succeeds because members from both organizations, terramearth.com or flowlogistic.com, are allowed on projects in the “Apps” folder.
  • B. The action succeeds and the new member is successfully added to the project's Identity and Access Management (IAM) policy because all policies are inherited by underlying folders and projects.
  • C. The action fails because a constraints/iam.allowedPolicyMemberDomains organization policy must be defined on the current project to deactivate the constraint temporarily.
  • D. The action fails because a constraints/iam.allowedPolicyMemberDomains organization policy is in place and only members from the flowlogistic.com organization are allowed.
Suggested Answer: D 🗳️
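The effect of inheritFromParent: false that the commenters point to can be sketched as follows: the folder's allowed-domains list replaces, rather than merges with, the organization's list. This is an illustrative simulation, not the Resource Manager API:

```python
# Sketch of effective-policy evaluation for
# constraints/iam.allowedPolicyMemberDomains. With inheritFromParent: false
# the folder policy replaces the parent policy; with true, list policies
# merge (simplified here to a set union). Illustration only.

def effective_allowed_domains(org_domains, folder_domains, inherit_from_parent):
    if inherit_from_parent:
        return set(org_domains) | set(folder_domains)
    return set(folder_domains)  # the parent's policy is ignored

allowed = effective_allowed_domains(
    org_domains={"terramearth.com"},
    folder_domains={"flowlogistic.com"},
    inherit_from_parent=False,
)
print("terramearth.com" in allowed)   # False: granting a terramearth.com user fails
print("flowlogistic.com" in allowed)  # True
```

This matches answer D: on projects under the "Apps" folder, only flowlogistic.com members can be added to IAM policies.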

Comments

Mr_MIXER007
7 months ago
Selected Answer: D
The action fails because a constraints/iam.allowedPolicyMemberDomains organization policy is in place and only members from the flowlogistic.com organization are allowed
upvoted 1 times
...
JoaquinJimenezGarcia
1 year, 4 months ago
Selected Answer: D
Will fail because of the inheritFromParent: false option. Even if the level above has the right permissions, it will not inherit into the lower levels.
upvoted 4 times
...
[Removed]
1 year, 4 months ago
Selected Answer: D
https://cloud.google.com/resource-manager/reference/rest/v1/Policy#listpolicy
upvoted 3 times
...
MisterHairy
1 year, 4 months ago
Selected Answer: D
The correct answer is D. The action fails because a constraints/iam.allowedPolicyMemberDomains organization policy is in place and only members from the flowlogistic.com organization are allowed. The inheritFromParent: false property on the “Apps” folder means that it does not inherit the organization policy from the organization node. Therefore, only the policy set at the folder level applies, which allows only members from the flowlogistic.com organization. As a result, the attempt to grant access to the user [email protected] fails because this user is not a member of the flowlogistic.com organization.
upvoted 3 times
...

Question 243

Exam Professional Cloud Security Engineer topic 1 question 243 discussion

Question #: 243
Topic #: 1

An administrative application is running on a virtual machine (VM) in a managed group, listening on port 5601, inside a Virtual Private Cloud (VPC) that currently has no internet access. You want to expose the web interface on port 5601 to users and enforce authentication and authorization with Google credentials.

What should you do?

  • A. Configure the bastion host with OS Login enabled and allow connection to port 5601 at VPC firewall. Log in to the bastion host from the Google Cloud console by using SSH-in-browser and then to the web application.
  • B. Modify the VPC routing with the default route point to the default internet gateway. Modify the VPC Firewall rule to allow access from the internet 0.0.0.0/0 to port 5601 on the application instance.
  • C. Configure Secure Shell Access (SSH) bastion host in a public network, and allow only the bastion host to connect to the application on port 5601. Use a bastion host as a jump host to connect to the application.
  • D. Configure an HTTP Load Balancing instance that points to the managed group with Identity-Aware Proxy (IAP) protection with Google credentials. Modify the VPC firewall to allow access from IAP network range.
Suggested Answer: D 🗳️
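The firewall step in answer D relies on IAP's documented TCP forwarding range, 35.235.240.0/20: backends should accept traffic only from that range, so every request has to pass through IAP's authentication. A minimal sketch of that reachability check (illustrative only; in practice this is a VPC firewall rule, not application code):

```python
# Sketch of the VPC firewall condition behind answer D: allow only
# sources inside IAP's published forwarding range, 35.235.240.0/20,
# so requests that bypass IAP never reach port 5601. Illustration only.
import ipaddress

IAP_RANGE = ipaddress.ip_network("35.235.240.0/20")

def allowed_by_firewall(source_ip: str) -> bool:
    return ipaddress.ip_address(source_ip) in IAP_RANGE

print(allowed_by_firewall("35.235.240.7"))  # True: traffic arriving via IAP
print(allowed_by_firewall("203.0.113.9"))   # False: direct internet access blocked
```

IAP in front of the load balancer then handles the Google-credential sign-in, and IAM roles such as IAP-secured Web App User decide which authenticated users get through.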

Comments

glb2
6 months, 3 weeks ago
Selected Answer: D
D. Configuring an HTTP Load Balancing instance with Identity-Aware Proxy (IAP) protection ensures that access to the web interface at port 5601 is authenticated and authorized using Google credentials. IAP verifies user identity before allowing access to the backend service.
upvoted 2 times
...
PhuocT
7 months, 3 weeks ago
Selected Answer: D
D is the answer
upvoted 1 times
...
mjcts
8 months ago
Selected Answer: B
The only viable option
upvoted 1 times
PhuocT
7 months, 3 weeks ago
How could B enforce authentication and authorization with Google credentials?
upvoted 1 times
...
...
MisterHairy
10 months, 3 weeks ago
Selected Answer: D
The correct answer is D. Configure an HTTP Load Balancing instance that points to the managed group with Identity-Aware Proxy (IAP) protection with Google credentials. Modify the VPC firewall to allow access from IAP network range. This approach allows you to expose the web interface securely by using Identity-Aware Proxy (IAP), which provides authentication and authorization with Google credentials. The HTTP Load Balancer can distribute traffic to the VMs in the managed group, and the VPC firewall rule ensures that access is allowed from the IAP network range.
upvoted 1 times
...

Question 244

Exam Professional Cloud Security Engineer topic 1 question 244 discussion

Question #: 244
Topic #: 1

Your company’s users access data in a BigQuery table. You want to ensure they can only access the data during working hours.

What should you do?

  • A. Assign a BigQuery Data Viewer role along with an IAM condition that limits the access to specified working hours.
  • B. Run a gsutil script that assigns a BigQuery Data Viewer role, and remove it only during the specified working hours.
  • C. Assign a BigQuery Data Viewer role to a service account that adds and removes the users daily during the specified working hours.
  • D. Configure Cloud Scheduler so that it triggers a Cloud Functions instance that modifies the organizational policy constraint for BigQuery during the specified working hours.
Suggested Answer: A 🗳️
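The IAM condition in answer A is written in CEL against request attributes such as request.time and attached to the role binding. A Python analogue of such a working-hours check is sketched below; the 09:00-17:00 Monday-to-Friday window is an assumed example of "working hours", not something stated in the question:

```python
# Python analogue of a time-based IAM condition (in Google Cloud this is
# a CEL expression on the role binding, e.g. over request.time). The
# weekday/hour window here is an assumption for illustration.
from datetime import datetime

def within_working_hours(ts: datetime) -> bool:
    # Monday-Friday (weekday() 0-4), 09:00 up to but not including 17:00.
    return ts.weekday() < 5 and 9 <= ts.hour < 17

print(within_working_hours(datetime(2024, 3, 4, 10, 30)))  # Monday 10:30 -> True
print(within_working_hours(datetime(2024, 3, 9, 10, 30)))  # Saturday -> False
```

The point of answer A is that this evaluation happens on every access inside IAM itself, with no scripts or schedulers to run, unlike answers B through D.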

Comments

MoAk
4 months, 2 weeks ago
Selected Answer: A
https://cloud.google.com/iam/docs/configuring-temporary-access#iam-conditions-expirable-access-gcloud
upvoted 1 times
...
Sundar_Pichai
7 months, 2 weeks ago
Anyone else take the exam recently? Handful of new questions or all new?
upvoted 1 times
...
dat987
7 months, 2 weeks ago
It's been a year since the last update, hopefully there will be an update soon
upvoted 1 times
...
laxman94
7 months, 3 weeks ago
The exam version changed, so most of the questions are coming from a different pool. Are these the updated questions?
upvoted 1 times
...
Akso
8 months, 2 weeks ago
I have just passed my exam, but only 10 questions were from here... I really like this community-based exam preparation, but this time I was surprised how invalid the dump is.
upvoted 2 times
...
Bettoxicity
1 year ago
Selected Answer: D
-Cloud Scheduler: Set up a Cloud Scheduler job that triggers a Cloud Function at specific times corresponding to your desired working hours. -Cloud Function: Create a Cloud Function that modifies the BigQuery organizational policy constraint. During working hours, the function allows access. Outside working hours, it restricts access.
upvoted 1 times
...
glb2
1 year ago
Selected Answer: A
A. Correct answer.
upvoted 1 times
...
NaikMN
1 year, 4 months ago
Select A, https://cloud.google.com/iam/docs/conditions-overview
upvoted 2 times
...
MisterHairy
1 year, 4 months ago
Selected Answer: A
The correct answer is A. Assign a BigQuery Data Viewer role along with an IAM condition that limits the access to specified working hours. IAM conditions in Google Cloud can be used to fine-tune access control according to attributes like time, date, and IP address. In this case, you can create an IAM condition that allows access only during working hours. This condition can be attached to the BigQuery Data Viewer role, ensuring that users can only access the data in the BigQuery table during the specified times.
upvoted 3 times
...

Question 245

Exam Professional Cloud Security Engineer topic 1 question 245 discussion

Question #: 245
Topic #: 1

You have placed several Compute Engine instances in a private subnet. You want to allow these instances to access Google Cloud services, like Cloud Storage, without traversing the internet. What should you do?

  • A. Enable Private Google Access for the private subnet.
  • B. Configure Private Service Connect for the private subnet's Virtual Private Cloud (VPC) and allocate an IP range for the Compute Engine instances.
  • C. Reserve and assign static external IP addresses for the Compute Engine instances.
  • D. Create a Cloud NAT gateway for the region where the private subnet is configured.
Suggested Answer: A 🗳️

Comments

Mr_MIXER007
7 months ago
Selected Answer: A
The correct answer is: A. Enable Private Google Access for the private subnet. Reasoning: Private Google Access: This feature allows instances in a private subnet to reach Google APIs and services without using their public IP addresses. This is the most direct and recommended way to achieve your goal.
upvoted 2 times
...
brunolopes07
7 months, 1 week ago
New exam questions !
upvoted 2 times
...

Question 246

Exam Professional Cloud Security Engineer topic 1 question 246 discussion

Question #: 246
Topic #: 1

Your organization relies heavily on Cloud Run for its containerized applications. You utilize Cloud Build for image creation, Artifact Registry for image storage, and Cloud Run for deployment. You must ensure that containers with vulnerabilities rated above a common vulnerability scoring system (CVSS) score of "medium" are not deployed to production. What should you do?

  • A. Implement vulnerability scanning as part of the Cloud Build process. If any medium or higher vulnerabilities are detected, manually rebuild the image with updated components.
  • B. Perform manual vulnerability checks post-build, but before Cloud Run deployment. Implement a manual security-engineer-driven remediation process.
  • C. Configure Binary Authorization on Cloud Run to enforce image signatures. Create policies to allow deployment only for images passing a defined vulnerability threshold.
  • D. Utilize a vulnerability scanner during the Cloud Build stage and set Artifact Registry permissions to block images containing vulnerabilities above "medium."
Suggested Answer: C 🗳️
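The gate described in answer C can be sketched as a severity threshold over scan findings: if the worst vulnerability exceeds "medium", deployment is refused. The decision function and severity names below are illustrative; in practice Binary Authorization consumes signed attestations (produced after Artifact Analysis scanning) rather than raw scan output:

```python
# Sketch of the deployment gate in answer C: block images whose worst
# vulnerability severity exceeds a MEDIUM threshold. Severity ranking and
# the decision function are illustrative, not the Binary Authorization API.
SEVERITY_RANK = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def may_deploy(vuln_severities, threshold="MEDIUM") -> bool:
    worst = max((SEVERITY_RANK[s] for s in vuln_severities), default=0)
    return worst <= SEVERITY_RANK[threshold]

print(may_deploy(["LOW", "MEDIUM"]))   # True: within the threshold
print(may_deploy(["MEDIUM", "HIGH"]))  # False: a HIGH finding blocks deployment
```

Encoding this decision in a Binary Authorization policy (rather than in build scripts or registry permissions, as in A, B, and D) enforces it at the Cloud Run deployment boundary itself.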

Comments

JohnDohertyDoe
3 months, 1 week ago
Selected Answer: C
https://cloud.google.com/binary-authorization/docs/run/enabling-binauthz-cloud-run
upvoted 1 times
...
Mr_MIXER007
7 months ago
Selected Answer: C
The best solution is C. Configure Binary Authorization on Cloud Run to enforce image signatures. Create policies to allow deployment only for images passing a defined vulnerability threshold. Here's why this is the preferred approach: Binary Authorization: Provides a strong, policy-based control mechanism for deploying containers. It ensures only trusted and verified images can be deployed to Cloud Run. Vulnerability Threshold: By setting a policy within Binary Authorization, you can explicitly block the deployment of any container images that have vulnerabilities exceeding a CVSS score of "medium". Automation: This approach enables automated enforcement of security standards at the deployment stage, preventing vulnerable images from reaching production.
upvoted 2 times
...
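The policy mechanism Mr_MIXER007 describes is normally expressed as a Binary Authorization project policy. A minimal sketch, assuming a hypothetical attestor named vuln-scan-passed whose attestation is only created when a scan finds no vulnerabilities above medium (PROJECT_ID is a placeholder):

```yaml
# Sketch of a Binary Authorization policy. Cloud Run deployments are blocked
# unless the image carries an attestation from the vulnerability-gate attestor.
name: projects/PROJECT_ID/policy
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    - projects/PROJECT_ID/attestors/vuln-scan-passed
```

The attestation itself would be created by a Cloud Build step after the vulnerability scan passes, so the gate is enforced at deploy time rather than relying on registry permissions.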
yokoyan
7 months, 1 week ago
Selected Answer: C
I think it's C.
upvoted 3 times
...

Question 247

Question #: 247
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You run a web application on top of Cloud Run that is exposed to the internet with an Application Load Balancer. You want to ensure that only privileged users from your organization can access the application. The proposed solution must support browser access with single sign-on. What should you do?

  • A. Change Cloud Run configuration to require authentication. Assign the role of Cloud Run Invoker to the group of privileged users.
  • B. Create a group of privileged users in Cloud Identity. Assign the role of Cloud Run User to the group directly on the Cloud Run service.
  • C. Change the Ingress Control configuration of Cloud Run to internal and create firewall rules to allow only access from known IP addresses.
  • D. Activate Identity-Aware Proxy (IAP) on the Application Load Balancer backend. Assign the role of IAP-secured Web App User to the group of privileged users.
Suggested Answer: D 🗳️

Comments

Mr_MIXER007
7 months ago
Selected Answer: D
The correct answer is D. Activate Identity-Aware Proxy (IAP) on the Application Load Balancer backend. Assign the role of IAP-secured Web App User to the group of privileged users. Here's why: IAP for Authentication and Authorization: IAP provides a centralized way to control access to your Cloud Run service, ensuring that only authenticated users can reach it. It integrates seamlessly with Cloud Identity for user management and supports single sign-on (SSO) for a smooth user experience. Role-Based Access Control: By assigning the IAP-secured Web App User role to the group of privileged users, you can precisely control who has access to the application.
upvoted 2 times
...
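The IAM side of the IAP setup described above can be sketched as a policy binding on the IAP-secured backend (the group address is illustrative; roles/iap.httpsResourceAccessor is the role surfaced in the console as "IAP-secured Web App User"):

```yaml
# Illustrative IAM binding granting the privileged-users group access
# through IAP on the load balancer backend service.
bindings:
  - role: roles/iap.httpsResourceAccessor   # "IAP-secured Web App User"
    members:
      - group:privileged-users@example.com  # Cloud Identity group (placeholder)
```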
1e22522
7 months ago
Selected Answer: D
should be D
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: D
I think it's D.
upvoted 1 times
...

Question 248

Question #: 248
Topic #: 1
[All Professional Cloud Security Engineer Questions]

During a routine security review, your team discovered a suspicious login attempt to impersonate a highly privileged but regularly used service account by an unknown IP address. You need to effectively investigate in order to respond to this potential security incident. What should you do?

  • A. Enable Cloud Audit Logs for the resources that the service account interacts with. Review the logs for further evidence of unauthorized activity.
  • B. Review Cloud Audit Logs for activity related to the service account. Focus on the time period of the suspicious login attempt.
  • C. Run a vulnerability scan to identify potentially exploitable weaknesses in systems that use the service account.
  • D. Check Event Threat Detection in Security Command Center for any related alerts. Cross-reference your findings with Cloud Audit Logs.
Suggested Answer: D 🗳️

Comments

BPzen
4 months, 1 week ago
Selected Answer: D
Event Threat Detection (ETD) in Security Command Center (SCC): ETD automatically detects suspicious activity, such as anomalous service account usage or potential credential compromise, by analyzing logs in near real-time. Checking ETD alerts can quickly surface relevant insights about the suspicious activity. Cloud Audit Logs: Cross-referencing findings in ETD with Cloud Audit Logs helps confirm the scope of the incident by providing a complete history of actions performed by the service account, including the time of the suspicious login attempt.
upvoted 1 times
...
dv1
5 months, 3 weeks ago
Selected Answer: B
Question does not say that SCC is enabled, does it?
upvoted 3 times
KLei
5 months ago
" need to effectively investigate in order to respond to this potential security incident"
upvoted 2 times
...
...
Mr_MIXER007
7 months ago
Selected Answer: D
Selected Answer: D
upvoted 1 times
...
1e22522
7 months ago
Selected Answer: D
D. Check Event Threat Detection in Security Command Center for any related alerts. Cross-reference your findings with Cloud Audit Logs. Explanation: Security Command Center (SCC) is Google Cloud's security and risk management platform. Event Threat Detection within SCC is specifically designed to detect suspicious activity, such as unauthorized logins, and generates alerts based on predefined threat patterns. This tool would help you quickly identify if the suspicious login attempt is part of a known threat pattern. After checking for alerts in Event Threat Detection, cross-referencing with Cloud Audit Logs will give you detailed insights into the actions performed by the service account, allowing you to investigate the extent of any potential breach.
upvoted 2 times
...
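As a concrete starting point for the audit-log side of the investigation, a Logs Explorer filter along these lines (the service account email and time window are placeholders) narrows Cloud Audit Logs to activity performed as the suspect service account around the login attempt:

```
logName:"cloudaudit.googleapis.com"
protoPayload.authenticationInfo.principalEmail="privileged-sa@my-project.iam.gserviceaccount.com"
timestamp>="2024-06-01T00:00:00Z" AND timestamp<="2024-06-02T00:00:00Z"
```

Cross-referencing hits from this filter with Event Threat Detection findings in Security Command Center gives both the alert context and the full action history.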
yokoyan
7 months, 1 week ago
Selected Answer: D
I think it's D.
upvoted 1 times
...

Question 249

Question #: 249
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization has an operational image classification model running on a managed AI service on Google Cloud. You are in a configuration review with stakeholders and must describe the security responsibilities for the image classification model. What should you do?

  • A. Explain that using platform-as-a-service (PaaS) transfers security concerns to Google. Describe the need for strict API usage limits to protect against unexpected usage and billing spikes.
  • B. Explain the security aspects of the code that transforms user-uploaded images using Google's service. Define Cloud IAM for fine-grained access control within the development team.
  • C. Explain Google's shared responsibility model. Focus the configuration review on Identity and Access Management (IAM) permissions, secure data upload/download procedures, and monitoring logs for any potential malicious activity.
  • D. Explain the development of custom network firewalls around the image classification service for deep intrusion detection and prevention. Describe vulnerability scanning tools for known vulnerabilities.
Suggested Answer: C 🗳️

Comments

JohnDohertyDoe
3 months, 1 week ago
Selected Answer: C
https://cloud.google.com/vertex-ai/docs/shared-responsibility
upvoted 1 times
...
Mr_MIXER007
7 months ago
Selected Answer: C
The most appropriate approach is C.
upvoted 2 times
...
yokoyan
7 months, 1 week ago
Selected Answer: C
I think it's C.
upvoted 1 times
...

Question 250

Question #: 250
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are managing data in your organization's Cloud Storage buckets and are required to retain objects. To reduce storage costs, you must automatically downgrade the storage class of objects older than 365 days to Coldline storage. What should you do?

  • A. Use Cloud Asset Inventory to generate a report of the configuration of all storage buckets. Examine the Lifecycle management policy settings and ensure that they are set correctly.
  • B. Set up a Cloud Run job with Cloud Scheduler to execute a script that searches for and removes files older than 365 days from your Cloud Storage buckets.
  • C. Enable the Autoclass feature to manage all aspects of bucket storage classes.
  • D. Define a lifecycle policy JSON with an action on SetStorageClass to COLDLINE with an age condition of 365 and matchStorageClass STANDARD.
Suggested Answer: D 🗳️

Comments

Pime13
4 months ago
Selected Answer: D
D. Define a lifecycle policy JSON with an action on SetStorageClass to COLDLINE with an age condition of 365 and matchStorageClass STANDARD.
upvoted 1 times
...
BPzen
4 months, 1 week ago
Selected Answer: D
Create a lifecycle policy JSON: Specify an action (SetStorageClass) to move objects to COLDLINE storage. Include a condition (age) to apply the policy to objects older than 365 days. Use the matchStorageClass parameter to apply the policy only to objects currently in STANDARD storage, ensuring that objects already in lower-cost classes (e.g., COLDLINE or ARCHIVE) are not unnecessarily moved.
upvoted 1 times
...
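The policy BPzen describes corresponds to a bucket lifecycle configuration like the following (note the actual JSON API field is matchesStorageClass; the option text's matchStorageClass is a paraphrase):

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
        "condition": {"age": 365, "matchesStorageClass": ["STANDARD"]}
      }
    ]
  }
}
```

This file would typically be applied to a bucket with `gsutil lifecycle set policy.json gs://BUCKET_NAME` or the equivalent JSON API call.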
Mr_MIXER007
7 months ago
Selected Answer: D
D. Define a lifecycle policy JSON with an action on SetStorageClass to COLDLINE with an age condition of 365 and matchStorageClass STANDARD.
upvoted 1 times
...
1e22522
7 months ago
Selected Answer: D
It's D, I think.
upvoted 1 times
...
brunolopes07
7 months ago
I think D is correct.
upvoted 1 times
...

Question 251

Question #: 251
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization has a centralized identity provider that is used to manage human and machine access. You want to leverage this existing identity management system to enable on-premises applications to access Google Cloud without hard coded credentials. What should you do?

  • A. Enable Secure Web Proxy. Create a proxy subnet for each region that Secure Web Proxy will be deployed. Deploy an SSL certificate to Certificate Manager. Create a Secure Web Proxy policy and rules that allow access to Google Cloud services.
  • B. Enable Workforce Identity Federation. Create a workforce identity pool and specify the on-premises identity provider as a workforce identity pool provider. Create an attribute mapping to map the on-premises identity provider token to a Google STS token. Create an IAM binding that binds the required role(s) to the external identity by specifying the project ID, workload identity pool, and attribute that should be matched.
  • C. Enable Identity-Aware Proxy (IAP). Configure IAP by specifying the groups and service accounts that should have access to the application. Grant these identities the IAP-secured web app user role.
  • D. Enable Workload Identity Federation. Create a workload identity pool and specify the on-premises identity provider as a workload identity pool provider. Create an attribute mapping to map the on-premises identity provider token to a Google STS token. Create a service account with the necessary permissions for the workload. Grant the external identity the Workload Identity user role on the service account.
Suggested Answer: D 🗳️

Comments

nah99
4 months, 2 weeks ago
Selected Answer: D
The requirement of the question is for applications, not persons. So D.
upvoted 1 times
...
eychdee
5 months, 3 weeks ago
It's B. The keyword is workforce, not workload.
upvoted 1 times
...
Art
5 months, 4 weeks ago
Selected Answer: D
It's D. "You want to leverage this existing identity management system to enable on-premises applications to access Google Cloud without hard coded credentials." Workload Identity Federation is used for applications, while Workforce Identity Federation is used for humans.
upvoted 4 times
MoAk
4 months, 2 weeks ago
This is the best explanation if anyone still not sure.
upvoted 1 times
...
...
d0fa7d5
7 months ago
Selected Answer: D
Since it mentions "on-premises applications," I believe the correct answer is D, not B.
upvoted 4 times
...
1e22522
7 months ago
Selected Answer: D
I'm pretty sure it's D.
upvoted 1 times
1e22522
7 months ago
I am wrong, it's B.
upvoted 1 times
...
...
yokoyan
7 months, 1 week ago
Selected Answer: B
I think it's B.
upvoted 2 times
KLei
5 months ago
Workload Identity Federation allows applications running outside of Google Cloud (like on-premises systems) to authenticate to Google Cloud services using tokens from an existing identity provider without needing to manage or deploy long-lived credentials.
upvoted 2 times
yokoyan
4 months, 2 weeks ago
Workforce Identity : https://cloud.google.com/iam/docs/workforce-identity-federation#what_is_workforce_identity_federation Workload Identity : https://cloud.google.com/iam/docs/workload-identity-federation Yes, in this question we want to grant access to the application, so D might be the correct answer! Thanks!
upvoted 1 times
...
...
...
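For reference, the application-side artifact behind option D is a credential configuration file, normally generated with `gcloud iam workload-identity-pools create-cred-config`. The sketch below shows its general shape; the project number, pool, provider, service account, and token path are all placeholders:

```json
{
  "type": "external_account",
  "audience": "//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/on-prem-pool/providers/on-prem-provider",
  "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
  "token_url": "https://sts.googleapis.com/v1/token",
  "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/app-sa@my-project.iam.gserviceaccount.com:generateAccessToken",
  "credential_source": {
    "file": "/var/run/secrets/idp-token"
  }
}
```

The on-premises application points its Google Cloud client library at this file; the library exchanges the identity provider's token for a short-lived Google STS token and then impersonates the service account, so no long-lived key is ever stored.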

Question 252

Question #: 252
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization is migrating a sensitive data processing workflow from on-premises infrastructure to Google Cloud. This workflow involves the collection, storage, and analysis of customer information that includes personally identifiable information (PII). You need to design security measures to mitigate the risk of data exfiltration in this new cloud environment. What should you do?

  • A. Encrypt all sensitive data in transit and at rest. Establish secure communication channels by using TLS and HTTPS protocols.
  • B. Implement a Cloud DLP solution to scan and identify sensitive information, and apply redaction or masking techniques to the PII. Integrate VPC SC with your network security controls to block potential data exfiltration attempts.
  • C. Restrict all outbound network traffic from cloud resources. Implement rigorous access controls and logging for all sensitive data and the systems that process the data.
  • D. Rely on employee expertise to prevent accidental data exfiltration incidents.
Suggested Answer: B 🗳️

Comments

1e22522
7 months ago
Selected Answer: B
B is just great all around.
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: B
I think it's B.
upvoted 2 times
...

Question 253

Question #: 253
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization is building a chatbot that is powered by generative AI to deliver automated conversations with internal employees. You must ensure that no data with personally identifiable information (PII) is communicated through the chatbot. What should you do?

  • A. Encrypt data at rest for both input and output by using Cloud KMS, and apply least privilege access to the encryption keys.
  • B. Discover and transform PII data in both input and output by using the Cloud Data Loss Prevention (Cloud DLP) API.
  • C. Prevent PII data exfiltration by using VPC-SC to create a safe scope around your chatbot.
  • D. Scan both input and output by using data encryption tools from the Google Cloud Marketplace.
Suggested Answer: B 🗳️

Comments

nah99
4 months, 2 weeks ago
Selected Answer: B
https://cloud.google.com/blog/topics/developers-practitioners/how-keep-sensitive-data-out-your-chatbots
upvoted 1 times
...
1e22522
7 months ago
Selected Answer: B
It's B; yokoyan is just right all the time.
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: B
I think it's B.
upvoted 2 times
...
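To make the Cloud DLP approach in option B concrete, the sketch below builds a deidentifyContent request payload as a plain dict (the project ID, info types, and masking character are illustrative; in a real chatbot this payload would be sent through the google-cloud-dlp client for both the user's input and the model's output):

```python
# Sketch: de-identifying chatbot text with the Cloud DLP API.
# The request is built as a plain dict so its structure is visible;
# a real deployment would pass it to DlpServiceClient.deidentify_content.

def build_deidentify_request(project_id: str, text: str) -> dict:
    """Build a deidentifyContent request that masks common PII info types."""
    return {
        "parent": f"projects/{project_id}/locations/global",
        "inspect_config": {
            # Detectors to run over the text (illustrative subset).
            "info_types": [
                {"name": "EMAIL_ADDRESS"},
                {"name": "PHONE_NUMBER"},
                {"name": "PERSON_NAME"},
            ]
        },
        "deidentify_config": {
            "info_type_transformations": {
                "transformations": [
                    {
                        # Replace every detected character with '#'.
                        "primitive_transformation": {
                            "character_mask_config": {"masking_character": "#"}
                        }
                    }
                ]
            }
        },
        "item": {"value": text},
    }

request = build_deidentify_request("my-project", "Contact me at jane@example.com")
# Real call (requires credentials and the google-cloud-dlp package):
#   google.cloud.dlp_v2.DlpServiceClient().deidentify_content(request=request)
```

Running the same transformation on both directions of the conversation is what keeps PII out of prompts, logs, and responses.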

Question 254

Question #: 254
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization has applications that run in multiple clouds. The applications require access to a Google Cloud resource running in your project. You must use short-lived access credentials to maintain security across the clouds. What should you do?

  • A. Create a managed workload identity. Bind an attested identity to the Compute Engine workload.
  • B. Create a service account key. Download the key to each application that requires access to the Google Cloud resource.
  • C. Create a workload identity pool with a workload identity provider for each external cloud. Set up a service account and add an IAM binding for impersonation.
  • D. Create a VPC firewall rule for ingress traffic with an allowlist of the IP ranges of the external cloud applications.
Suggested Answer: C 🗳️

Comments

Pime13
4 months ago
Selected Answer: C
Why Option C: Short-Lived Credentials: Workload Identity Federation allows you to use short-lived credentials, which are more secure than long-lived service account keys. Cross-Cloud Compatibility: By creating a workload identity pool and providers for each external cloud, you can securely authenticate and authorize applications running in different cloud environments. IAM Binding for Impersonation: This setup allows you to grant specific permissions to the service account, ensuring that only authorized actions are performed.
upvoted 1 times
...
BPzen
4 months, 1 week ago
Selected Answer: C
For applications running in multiple clouds that need access to Google Cloud resources, the Workload Identity Federation feature is the most secure and scalable solution. It allows you to grant external workloads access to Google Cloud resources using short-lived credentials, eliminating the need to manage long-lived service account keys. Workload Identity Pool: Create a pool to represent identities from external clouds. Workload Identity Provider: Set up a provider for each external cloud to validate identities from those environments. Short-Lived Credentials: Use Google’s Security Token Service (STS) to exchange tokens from external identity providers for short-lived Google Cloud credentials. Service Account Impersonation: Set up a Google Cloud service account with the required permissions. Add an IAM binding to allow the external identity to impersonate the service account.
upvoted 1 times
...
koo_kai
6 months ago
Selected Answer: C
It"s C
upvoted 2 times
...
1e22522
7 months ago
Selected Answer: C
It's C
upvoted 2 times
...
SQLbox
7 months ago
C is the correct answer
upvoted 2 times
...
ABotha
7 months, 1 week ago
Correct Answer: C Short-lived access credentials: Workload Identity Federation (WIF) allows you to issue short-lived access tokens to external applications, reducing the risk of credential theft and misuse. Multiple clouds: You can create a workload identity pool for each external cloud, allowing applications from different environments to access your Google Cloud resources securely. Centralized management: WIF provides a centralized way to manage access to your Google Cloud resources, simplifying administration and improving security. Impersonation: By setting up a service account and adding an IAM binding for impersonation, you can allow external applications to act as the service account, granting them the necessary permissions to access your Google Cloud resources.
upvoted 4 times
...
yokoyan
7 months, 1 week ago
Selected Answer: A
I think it's A.
upvoted 1 times
yokoyan
4 months, 2 weeks ago
After reading ABotha's comment, I'm starting to think that C is correct.
upvoted 2 times
...
...

Question 255

Question #: 255
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization's financial modeling application is already deployed on Google Cloud. The application processes large amounts of sensitive customer financial data. Application code is old and poorly understood by your current software engineers. Recent threat modeling exercises have highlighted the potential risk of sophisticated side-channel attacks against the application while the application is running. You need to further harden the Google Cloud solution to mitigate the risk of these side-channel attacks, ensuring maximum protection for the confidentiality of financial data during processing, while minimizing application problems. What should you do?

  • A. Enforce stricter access controls for Compute Engine instances by using service accounts, least privilege IAM policies, and limit network access.
  • B. Implement a runtime library designed to introduce noise and timing variations into the application's execution, which will disrupt side-channel attacks.
  • C. Migrate the application to Confidential VMs to provide hardware-level encryption of memory and protect sensitive data during processing.
  • D. Utilize customer-managed encryption keys (CMEK) to ensure complete control over the encryption process.
Suggested Answer: C 🗳️

Comments

Pime13
4 months ago
Selected Answer: C
https://cloud.google.com/confidential-computing/confidential-vm/docs/confidential-vm-overview https://cloud.google.com/confidential-computing/confidential-vm/docs
upvoted 1 times
...
BondleB
5 months, 1 week ago
Selected Answer: C
Reference: https://cloud.google.com/confidential-computing/confidential-vm/docs/confidential-vm-overview https://cloud.google.com/confidential-computing/confidential-vm/docs
upvoted 1 times
BondleB
5 months, 1 week ago
Migrate application to Confidential VMs in Google Cloud to provide hardware-level encryption, this can be achieved by: 1) Creating a Confidential VM instance in a sole-tenant node 2) Encrypting a new disk and enforcing Confidential VM use 3) Creating a new node pool with Confidential GKE Nodes enabled. Confidential VMs help protect sensitive data by providing a trusted execution environment for AI workloads thereby reducing the risk of unauthorized access, even by privileged users or malicious actors within the system. Since the application processes large and sensitive data while code is old and poorly understood by the current software engineers, this makes it more prone to unsuspecting attacks considering the highlighted potential risks of sophisticated side channel attacks while the application is running.
upvoted 1 times
...
...
1e22522
7 months ago
Selected Answer: C
Should be C
upvoted 1 times
...

Question 256

Question #: 256
Topic #: 1
[All Professional Cloud Security Engineer Questions]

Your organization has two VPC Service Controls service perimeters, Perimeter-A and Perimeter-B, in Google Cloud. You want to allow data to be copied from a Cloud Storage bucket in Perimeter-A to another Cloud Storage bucket in Perimeter-B. You must minimize exfiltration risk, only allow required connections, and follow the principle of least privilege. What should you do?

  • A. Configure a perimeter bridge between Perimeter-A and Perimeter-B, and specify the Cloud Storage buckets as the resources involved.
  • B. Configure a perimeter bridge between the projects hosting the Cloud Storage buckets in Perimeter-A and Perimeter-B.
  • C. Configure an egress rule for the Cloud Storage bucket in Perimeter-A and a corresponding ingress rule in Perimeter-B.
  • D. Configure a bidirectional egress/ingress rule for the Cloud Storage buckets in Perimeter-A and Perimeter-B.
Suggested Answer: C 🗳️

Comments

KLei
3 months, 3 weeks ago
Selected Answer: C
"minimize exfiltration risk, only allow required connections, and follow the principle of least privilege" - C follow the principle of least privilege
upvoted 2 times
KLei
3 months, 3 weeks ago
While a perimeter bridge allows communication between two service perimeters, it may grant broader access than necessary and does not adhere to the principle of least privilege, as it could expose resources to more connections than intended.
upvoted 1 times
...
...
Pime13
4 months ago
Selected Answer: A
https://cloud.google.com/vpc-service-controls/docs/share-across-perimeters#example_of_perimeter_bridges
upvoted 1 times
...
cachopo
4 months ago
Selected Answer: A
A perimeter bridge allows limited communication between resources in two service perimeters. By explicitly specifying the Cloud Storage buckets involved, you restrict the scope of the bridge to only the required resources. While egress and ingress rules control data flow, they are typically used for access to services outside the perimeters, not between two perimeters. Additionally, this approach lacks granularity and risks unintended exposure.
upvoted 1 times
cachopo
4 months ago
Also, this is pretty similar to the example exposed in the documentation: https://cloud.google.com/vpc-service-controls/docs/share-across-perimeters#example_of_perimeter_bridges
upvoted 1 times
...
...
BPzen
4 months, 2 weeks ago
Selected Answer: A
To enable data transfer between two VPC Service Controls service perimeters while minimizing exfiltration risk and adhering to the principle of least privilege, you need to use a perimeter bridge. This bridge allows controlled communication between the two perimeters but must be configured to include only the specific resources (in this case, the Cloud Storage buckets). Here's why the other options are less suitable: A perimeter bridge between projects is overly broad and does not align with the principle of least privilege. It would allow communication for all resources in the projects, increasing the risk of exfiltration. C. Configure an egress rule for the Cloud Storage bucket in Perimeter-A and a corresponding ingress rule in Perimeter-B. VPC Service Controls do not directly support simple egress/ingress rules between perimeters. Perimeter bridges are the designed mechanism for controlled inter-perimeter communication.
upvoted 1 times
...
nah99
4 months, 2 weeks ago
Selected Answer: C
https://cloud.google.com/vpc-service-controls/docs/ingress-egress-rules
upvoted 2 times
...
MoAk
4 months, 3 weeks ago
Selected Answer: A
Looks like this chat has been infiltrated. Clearly the correct answer is A; this exact feature exists for this use case.
upvoted 2 times
nah99
4 months, 2 weeks ago
Nope, C is better. "Ingress and egress rules can replace and simplify use cases that previously required one or more perimeter bridges." "Minimize exfiltration risk by constraining the exact service, methods, Google Cloud projects, VPC networks, and identities used to execute the data exchange." https://cloud.google.com/vpc-service-controls/docs/ingress-egress-rules
upvoted 2 times
MoAk
4 months, 2 weeks ago
This is the way. Thanks :) "Ingress and egress rules can replace and simplify use cases that previously required one or more perimeter bridges." Answer C
upvoted 1 times
...
...
...
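The ingress/egress mechanics nah99 points to look roughly like this on Perimeter-A's configuration (the service account, project number, and method list are placeholders; a matching ingress rule would be configured on Perimeter-B):

```yaml
# Egress rule attached to Perimeter-A (sketch): allow only the copy job's
# identity to read Cloud Storage objects toward the Perimeter-B project.
egressPolicies:
  - egressFrom:
      identities:
        - serviceAccount:copy-job@my-project.iam.gserviceaccount.com
    egressTo:
      resources:
        - projects/123456789012        # project hosting the Perimeter-B bucket
      operations:
        - serviceName: storage.googleapis.com
          methodSelectors:
            - method: "google.storage.objects.get"
```

Constraining the rule to one identity, one service, and specific methods is what makes this approach tighter than a perimeter bridge, which opens communication for all resources in the bridged projects.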
jmaquino
5 months ago
Selected Answer: C
C: Data exchange between clients and resources separated by perimeters is secured by using ingress and egress rules. https://cloud.google.com/vpc-service-controls/docs/overview
upvoted 2 times
...
BondleB
5 months, 1 week ago
Selected Answer: C
C
upvoted 2 times
...
d0fa7d5
7 months ago
Selected Answer: A
I think B is too broad in scope.
upvoted 4 times
...
BB_norway
7 months ago
Selected Answer: C
It should be C, due to the offered granular control and principle of least priviledge
upvoted 4 times
...
yokoyan
7 months, 1 week ago
Selected Answer: B
I think it's B.
upvoted 1 times
...

Question 257

Question #: 257
Topic #: 1
[All Professional Cloud Security Engineer Questions]

You are running code in Google Kubernetes Engine (GKE) containers in Google Cloud that require access to objects stored in a Cloud Storage bucket. You need to securely grant the Pods access to the bucket while minimizing management overhead. What should you do?

  • A. Create a service account. Grant bucket access to the Pods by using Workload Identity Federation for GKE.
  • B. Create a service account with keys. Store the keys in Secret Manager with a 30-day rotation schedule. Reference the keys in the Pods.
  • C. Create a service account with keys. Store the keys as a Kubernetes secret. Reference the keys in the Pods.
  • D. Create a service account with keys. Store the keys in Secret Manager. Reference the keys in the Pods.
Suggested Answer: A 🗳️

Comments

jmaquino
5 months ago
Selected Answer: A
A: Workload Identity Federation for GKE is the recommended way for your workloads running on Google Kubernetes Engine (GKE) to access Google Cloud services in a secure and manageable way. https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity
upvoted 1 times
...
1e22522
7 months ago
Selected Answer: A
It's A, I think.
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: A
I think it's A.
upvoted 1 times
...
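For reference, answer A's Workload Identity Federation setup can be sketched with the following commands. All project, bucket, namespace, and service account names are placeholders, and the cluster is assumed to already have a workload pool (PROJECT_ID.svc.id.goog) enabled:

```shell
# 1. Create the IAM service account and grant it read access to the bucket.
gcloud iam service-accounts create bucket-reader --project=PROJECT_ID
gcloud storage buckets add-iam-policy-binding gs://MY_BUCKET \
    --member="serviceAccount:bucket-reader@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"

# 2. Allow the Kubernetes service account to impersonate it.
gcloud iam service-accounts add-iam-policy-binding \
    bucket-reader@PROJECT_ID.iam.gserviceaccount.com \
    --role="roles/iam.workloadIdentityUser" \
    --member="serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

# 3. Annotate the Kubernetes service account used by the Pods.
kubectl annotate serviceaccount KSA_NAME --namespace=NAMESPACE \
    iam.gke.io/gcp-service-account=bucket-reader@PROJECT_ID.iam.gserviceaccount.com
```

No keys are created or rotated anywhere in this flow, which is why it minimizes management overhead compared with options B–D.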

Question 258

Exam Professional Cloud Security Engineer topic 1 question 258 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 258
Topic #: 1

Your organization is adopting Google Cloud and wants to ensure sensitive resources are only accessible from devices within the internal on-premises corporate network. You must configure Access Context Manager to enforce this requirement. These considerations apply:

• The internal network uses IP ranges 10.100.0.0/16 and 192.168.0.0/16.
• Some employees work remotely but connect securely through a company-managed virtual private network (VPN). The VPN dynamically allocates IP addresses from the pool 172.16.0.0/20.
• Access should be restricted to a specific Google Cloud project that is contained within an existing service perimeter.

What should you do?

  • A. Create an access level named "Authorized Devices." Utilize the Device Policy attribute to require corporate-managed devices. Apply the access level to the Google Cloud project and instruct all employees to enroll their devices in the organization's management system.
  • B. Create an access level titled "Internal Network Only." Add a condition with these attributes:
    • IP Subnetworks: 10.100.0.0/16, 192.168.0.0/16
    • Device Policy: Require OS as Windows or macOS. Apply this access level to the sensitive Google Cloud project.
  • C. Create an access level titled "Corporate Access." Add a condition with the IP Subnetworks attribute, including the ranges: 10.100.0.0/16, 192.168.0.0/16, 172.16.0.0/20. Assign this access level to a service perimeter encompassing the sensitive project.
  • D. Create a new IAM role called "InternalAccess". Add the IP ranges 10.100.0.0/16, 192.16.0.0/16, and 172.16.0.0/20 to the role as an IAM condition. Assign this role to IAM groups corresponding to on-premises and VPN users. Grant this role the necessary permissions on the resource within this sensitive Google Cloud project.
Suggested Answer: C 🗳️

Comments

nah99
4 months, 2 weeks ago
Selected Answer: C
https://cloud.google.com/access-context-manager/docs/overview#ip-address
upvoted 2 times
...
BondleB
5 months, 1 week ago
Selected Answer: C
The recommended approach is to configure Access Context Manager to create access levels incorporating the specified IP ranges (10.100.0.0/16, 192.168.0.0/16, and 172.16.0.0/20) and apply this access level to the existing service perimeter containing the sensitive resources. This method leverages Google Cloud’s built-in security features to enforce network-based access controls effectively and provides better security and compliance for the sensitive resources.
upvoted 2 times
...
yokoyan
7 months, 1 week ago
Selected Answer: C
I think it's C.
upvoted 1 times
...
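Answer C's access level can be sketched as follows. The policy ID, level name, and perimeter name are placeholders; the three CIDR ranges come from the question:

```shell
# Basic access level spec listing the corporate and VPN ranges.
cat > corporate-access.yaml <<'EOF'
- ipSubnetworks:
  - 10.100.0.0/16
  - 192.168.0.0/16
  - 172.16.0.0/20
EOF

gcloud access-context-manager levels create corporate_access \
    --title="Corporate Access" \
    --basic-level-spec=corporate-access.yaml \
    --policy=POLICY_ID

# Reference the level from the existing perimeter containing the project.
gcloud access-context-manager perimeters update my-perimeter \
    --add-access-levels=corporate_access \
    --policy=POLICY_ID
```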

Question 259

Exam Professional Cloud Security Engineer topic 1 question 259 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 259
Topic #: 1

Your team maintains 1PB of sensitive data within BigQuery that contains personally identifiable information (PII). You need to provide access to this dataset to another team within your organization for analysis purposes. You must share the BigQuery dataset with the other team while protecting the PII. What should you do?

  • A. Utilize BigQuery's row-level access policies to mask PII columns based on the other team's user identities.
  • B. Export the BigQuery dataset to Cloud Storage. Create a VPC Service Control perimeter and allow only their team's project access to the bucket.
  • C. Implement data pseudonymization techniques to replace the PII fields with non-identifiable values. Grant the other team access to the pseudonymized dataset.
  • D. Create a filtered copy of the dataset and replace the sensitive data with hash values in a separate project. Grant the other team access to this new project.
Suggested Answer: A 🗳️

Comments

Pime13
4 months ago
Selected Answer: C
Why Option C? Data Protection: Pseudonymization replaces PII with non-identifiable values, ensuring that sensitive information is protected while still allowing the other team to perform their analysis. Compliance: This approach helps in complying with data protection regulations by minimizing the risk of exposing PII. Usability: The other team can access and analyze the dataset without compromising the privacy of the individuals whose data is included Why not A?
upvoted 1 times
LegoJesus
2 months ago
The question starts with "Your team maintains 1PB of data in BigQuery". That's a lot of data. If you go with option C, you either: - De-identify the sensitive information in the original dataset, rendering the table and the info in it useless for the original team that uses it; or - Clone the entire dataset (another 1PB), de-identify the sensitive data, and grant access to the other team. So obviously A is the better answer here, because the PII is still needed — it just can't be shared with other teams.
upvoted 1 times
...
Pime13
4 months ago
Option A suggests using BigQuery's row-level access policies to mask PII columns based on the other team's user identities. Granularity of Protection: Row-level access policies are useful for controlling access to specific rows based on user identities, but they may not be as effective for masking or protecting specific columns containing PII. This approach might not fully anonymize the data, leaving some sensitive information potentially exposed. Complexity and Maintenance: Implementing and maintaining row-level access policies can be complex, especially if the dataset is large and the access requirements are detailed. This can lead to increased administrative overhead. Pseudonymization Benefits: Pseudonymization (option C) ensures that PII is replaced with non-identifiable values, providing a higher level of data protection. This method is more straightforward and ensures that the other team can work with the data without risking exposure of sensitive information.
upvoted 1 times
Pime13
4 months ago
https://cloud.google.com/blog/products/identity-security/how-to-use-google-cloud-to-find-and-protect-pii https://cloud.google.com/sensitive-data-protection/docs/dlp-bigquery
upvoted 1 times
...
...
...
cachopo
4 months ago
Selected Answer: A
Option A is the best approach because it allows you to implement fine-grained, secure access directly within BigQuery without needing to duplicate or transform the dataset. By using row-level access policies and column masking, you can efficiently protect the PII while enabling the other team to analyze the non-sensitive portions of the data.
upvoted 1 times
...
nah99
4 months, 2 weeks ago
Selected Answer: A
A. https://cloud.google.com/bigquery/docs/row-level-security-intro
upvoted 1 times
...
KLei
5 months ago
Selected Answer: A
A provides less footprint to solve the problem.
upvoted 1 times
...
jmaquino
5 months ago
Selected Answer: A
Example: https://cloud.google.com/bigquery/docs/row-level-security-intro?hl=es-419#filter_row_data_based_on_region
upvoted 2 times
...
jmaquino
5 months ago
Selected Answer: A
Sorry: A: I disagree with answer C. Row-level security allows you to filter data and enable access to specific rows in a table, based on eligible user conditions. Row-level security allows a data owner or administrator to implement policies, such as “Team Users.” https://cloud.google.com/bigquery/docs/row-level-security-intro?hl=en-US
upvoted 2 times
KLei
5 months ago
Yes, "replacing" the original data is wrong; we need to keep a true copy of the data somewhere. Copying to another target and then replacing the PII there would be OK, but with 1PB of data the copy operation is time-consuming and the BigQuery cost is high. C is not a good option.
upvoted 1 times
nah99
4 months, 2 weeks ago
True, they included the 1PB to make C blatantly worse
upvoted 1 times
...
...
...
jmaquino
5 months ago
Selected Answer: C
A: I disagree with answer C. Row-level security allows you to filter data and enable access to specific rows in a table, based on eligible user conditions. Row-level security allows a data owner or administrator to implement policies, such as “Team Users.” https://cloud.google.com/bigquery/docs/row-level-security-intro?hl=en-US
upvoted 1 times
KLei
5 months ago
so your answer should be A. My answer is A
upvoted 1 times
...
...
yokoyan
7 months, 1 week ago
Selected Answer: C
I think it's C.
upvoted 2 times
KLei
5 months ago
replacing the original PII values in the BQ? so where is the original true copy of data?
upvoted 1 times
...
...
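Answer A's row-level access policy is plain BigQuery DDL, runnable through the bq CLI. A minimal sketch with hypothetical dataset, table, and group names:

```shell
# Create a row access policy so the analyst group only sees non-sensitive
# rows; all identifiers here are made up for illustration.
bq query --use_legacy_sql=false '
CREATE ROW ACCESS POLICY analyst_filter
ON mydataset.customers
GRANT TO ("group:analyst-team@example.com")
FILTER USING (contains_pii = FALSE)'
```

Strictly speaking, row access policies filter rows rather than mask columns; column-level masking is configured separately via policy tags and data masking rules, which the option loosely groups together.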

Question 260

Exam Professional Cloud Security Engineer topic 1 question 260 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 260
Topic #: 1

Your organization uses Google Cloud to process large amounts of location data for analysis and visualization. The location data is potentially sensitive. You must design a solution that allows storing and processing the location data securely, minimizing data exposure risks, and adhering to both regulatory guidelines and your organization's internal data residency policies. What should you do?

  • A. Enable location restrictions on Compute Engine instances and virtual disk resources where the data is handled. Apply labels to tag geographic metadata for all stored data.
  • B. Use the Cloud Data Loss Prevention (Cloud DLP) API to scan for sensitive location data before any storage or processing. Create Cloud Storage buckets with global availability for optimal performance, relying on Cloud DLP results to filter and control data access.
  • C. Create regional Cloud Storage buckets with Object Lifecycle Management policies that limit data lifetime. Enable fine-grained access controls by using IAM conditions. Encrypt data with customer-managed encryption keys (CMEK) generated within specific Cloud KMS key locations.
  • D. Store data within BigQuery in a specified region by using dataset location configuration. Use authorized views and row-level security to enforce geographic access restrictions. Encrypt data within BigQuery tables by using customer-managed encryption keys (CMEK).
Suggested Answer: D 🗳️

Comments

yokoyan
Highly Voted 7 months, 1 week ago
Selected Answer: D
I think it's D.
upvoted 5 times
...
MoAk
Most Recent 4 months, 2 weeks ago
Selected Answer: D
Key word in the Q to look out for... analysis of data. Analysis of data typically = BQ required
upvoted 2 times
...
nah99
4 months, 2 weeks ago
Selected Answer: D
BigQuery
upvoted 1 times
...
KLei
5 months ago
Selected Answer: D
Originally A, but this "process large amounts of location data for analysis and visualization" makes me choose D. BQ is the best data store for analysis and visualization. I think.
upvoted 2 times
...
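Answer D's dataset location and CMEK configuration can be sketched with the bq CLI; the project, region, keyring, and key names are all placeholders:

```shell
# Create a dataset pinned to one region, with a CMEK default key from a
# Cloud KMS keyring in the same location. The key must exist first and the
# BigQuery service account needs roles/cloudkms.cryptoKeyEncrypterDecrypter.
bq mk --dataset \
    --location=europe-west3 \
    --default_kms_key=projects/PROJECT_ID/locations/europe-west3/keyRings/bq-ring/cryptoKeys/bq-key \
    PROJECT_ID:location_data
```

Authorized views and row-level security are then layered on top of this dataset to enforce the geographic access restrictions the option describes.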

Question 261

Exam Professional Cloud Security Engineer topic 1 question 261 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 261
Topic #: 1

Your organization utilizes Cloud Run services within multiple projects underneath the non-production folder which requires primarily internal communication. Some services need external access to approved fully qualified domain names (FQDN) while other external traffic must be blocked. Internal applications must not be exposed. You must achieve this granular control with allowlists overriding broader restrictions only for designated VPCs. What should you do?

  • A. Implement a global-level allowlist rule for the necessary FQDNs within a hierarchical firewall policy. Apply this policy across all VPCs in the organization and configure Cloud NAT without any additional filtering.
  • B. Create a folder-level deny-all rule for outbound traffic within a hierarchical firewall policy. Define FQDN allowlist rules in separate policies and associate them with the necessary VPCs. Configure Cloud NAT for these VPCs.
  • C. Create a project-level deny-all rule within a hierarchical structure and apply it broadly. Override this rule with separate FQDN allowlists defined in VPC-level firewall policies associated with the relevant VPCs.
  • D. Configure Cloud NAT with IP-based filtering to permit outbound traffic only to the allowlisted FQDNs' IP ranges. Apply Cloud NAT uniformly to all VPCs within the organization's folder structure.
Suggested Answer: B 🗳️

Comments

KLei
3 months, 2 weeks ago
Selected Answer: B
Public Cloud NAT supports not only VM instances but also Cloud Run. https://cloud.google.com/nat/docs/overview#supported-resources
upvoted 1 times
...
Pime13
4 months ago
Selected Answer: B
This approach allows you to: Enforce a deny-all rule at the folder level, ensuring that no outbound traffic is allowed by default. Create specific allowlist rules for the approved FQDNs and apply these rules to the necessary VPCs, providing the required external access. Configure Cloud NAT to handle the outbound traffic for these VPCs, ensuring that the traffic is routed correctly while adhering to the allowlist rules.
upvoted 1 times
...
MoAk
4 months, 3 weeks ago
Selected Answer: B
Only answer that makes sense to me.
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: B
I think it's B.
upvoted 1 times
...
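Answer B can be sketched as below. All IDs are placeholders; note that for the VPC-level allowlists to take effect, the folder-level rule priorities must be arranged so the hierarchical policy delegates evaluation downward (for example via goto_next rules) rather than terminally denying everything first:

```shell
# Folder-level hierarchical policy with a broad egress deny.
gcloud compute firewall-policies create \
    --folder=NONPROD_FOLDER_ID --short-name=nonprod-egress-deny
gcloud compute firewall-policies rules create 1000 \
    --firewall-policy=HIER_POLICY_ID \
    --direction=EGRESS --action=deny \
    --dest-ip-ranges=0.0.0.0/0 --layer4-configs=all

# FQDN allowlist in a global network firewall policy, associated only with
# the designated VPC.
gcloud compute network-firewall-policies create nonprod-egress-allow --global
gcloud compute network-firewall-policies rules create 100 \
    --firewall-policy=nonprod-egress-allow --global-firewall-policy \
    --direction=EGRESS --action=allow \
    --dest-fqdns=api.example.com --layer4-configs=tcp:443
gcloud compute network-firewall-policies associations create \
    --firewall-policy=nonprod-egress-allow --global-firewall-policy \
    --network=allowlisted-vpc
```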

Question 262

Exam Professional Cloud Security Engineer topic 1 question 262 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 262
Topic #: 1

Your organization hosts a sensitive web application in Google Cloud. To protect the web application, you've set up a virtual private cloud (VPC) with dedicated subnets for the application's frontend and backend components. You must implement security controls to restrict incoming traffic, protect against web-based attacks, and monitor internal traffic. What should you do?

  • A. Configure Cloud Firewall to permit allow-listed traffic only, deploy Google Cloud Armor with predefined rules for blocking common web attacks, and deploy Cloud Intrusion Detection System (IDS) to detect internal traffic anomalies.
  • B. Configure Google Cloud Armor to allow incoming connections, configure DNS Security Extensions (DNSSEC) on Cloud DNS to secure against common web attacks, and deploy Cloud Intrusion Detection System (Cloud IDS) to detect internal traffic anomalies.
  • C. Configure Cloud Intrusion Detection System (Cloud IDS) to monitor incoming connections, deploy Identity-Aware Proxy (IAP) to block common web attacks, and deploy Google Cloud Armor to detect internal traffic anomalies.
  • D. Configure Cloud DNS to secure incoming traffic, deploy Cloud Intrusion Detection System (Cloud IDS) to detect common web attacks, and deploy Google Cloud Armor to detect internal traffic anomalies.
Suggested Answer: A 🗳️

Comments

Pime13
4 months ago
Selected Answer: A
Here's why: Cloud Firewall: By configuring the firewall to permit only allow-listed traffic, you can restrict incoming traffic to only trusted sources, enhancing security. Google Cloud Armor: This service provides protection against common web-based attacks such as DDoS and SQL injection by using predefined rules. Cloud Intrusion Detection System (IDS): Deploying IDS helps in monitoring internal traffic for any anomalies, ensuring that any suspicious activity within the VPC is detected and addressed promptly. This combination of services provides a comprehensive security posture for your sensitive web application, addressing both external and internal threats.
upvoted 1 times
...
MoAk
4 months, 3 weeks ago
Selected Answer: A
A is good.
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: A
I think it's A.
upvoted 2 times
...
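Two pieces of answer A can be sketched with gcloud; the policy, network, and endpoint names are placeholders:

```shell
# Cloud Armor: attach a preconfigured WAF rule (SQL injection) to a policy.
gcloud compute security-policies rules create 1000 \
    --security-policy=web-app-policy \
    --expression="evaluatePreconfiguredWaf('sqli-v33-stable')" \
    --action=deny-403

# Cloud IDS: endpoint that inspects mirrored traffic from the VPC for
# internal anomalies.
gcloud ids endpoints create app-ids-endpoint \
    --network=projects/PROJECT_ID/global/networks/app-vpc \
    --zone=us-central1-a \
    --severity=INFORMATIONAL
```

The third piece, allow-listed VPC firewall rules, is standard `gcloud compute firewall-rules create` configuration on the frontend and backend subnets.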

Question 263

Exam Professional Cloud Security Engineer topic 1 question 263 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 263
Topic #: 1

Your organization relies heavily on virtual machines (VMs) in Compute Engine. Due to team growth and resource demands, VM sprawl is becoming problematic. Maintaining consistent security hardening and timely package updates poses an increasing challenge. You need to centralize VM image management and automate the enforcement of security baselines throughout the virtual machine lifecycle. What should you do?

  • A. Use VM Manager to automatically distribute and apply patches to VMs across your projects. Integrate VM Manager with hardened, organization-standard VM images stored in a central repository.
  • B. Configure the sole-tenancy feature in Compute Engine for all projects. Set up custom organization policies in Policy Controller to restrict the operating systems and image sources that teams are allowed to use.
  • C. Create a Cloud Build trigger to build a pipeline that generates hardened VM images. Run vulnerability scans in the pipeline, and store images with passing scans in a registry. Use instance templates pointing to this registry.
  • D. Activate Security Command Center Enterprise. Use VM discovery and posture management features to monitor hardening state and trigger automatic responses upon detection of issues.
Suggested Answer: A 🗳️

Comments

Pime13
4 months ago
Selected Answer: C
This approach ensures that: Centralized Image Management: Hardened VM images are created and stored in a central registry. Automated Security Enforcement: Vulnerability scans are run in the pipeline, ensuring that only secure images are used. Consistency: Instance templates pointing to the registry ensure that all VMs are created from the approved, secure images. Option A suggests using VM Manager to automatically distribute and apply patches to VMs across your projects and integrating VM Manager with hardened, organization-standard VM images stored in a central repository. While this approach addresses patch management and centralizes image storage, it doesn't fully automate the enforcement of security baselines throughout the VM lifecycle.
upvoted 1 times
...
BPzen
4 months, 2 weeks ago
Selected Answer: C
Explanation: VM sprawl and security hardening challenges necessitate a robust solution for centralized VM image management and automation of security baselines. Implementing a pipeline to create, validate, and distribute hardened images ensures consistency, security, and compliance throughout the VM lifecycle. While VM Manager is excellent for patch management, it does not centralize or automate the creation of hardened VM images. This solution does not address the root cause of inconsistent VM configurations caused by VM sprawl.
upvoted 1 times
...
KLei
4 months, 4 weeks ago
Selected Answer: A
VM Manager allows you to automate the management of your virtual machines, including patch management.
upvoted 1 times
...
koo_kai
6 months ago
Selected Answer: A
It's A
upvoted 1 times
...
1e22522
7 months ago
Selected Answer: A
It's A 100%
upvoted 4 times
...
SQLbox
7 months ago
A is the correct answer. VM Manager allows you to centrally manage and automate patching, configuration management, and compliance enforcement for VMs. By integrating with hardened VM images stored in a central repository, you ensure that VMs are consistently created with security baselines and regularly updated. This solution provides automation and central control, which addresses both the challenges of VM sprawl and the need for consistent security.
upvoted 3 times
...
yokoyan
7 months, 1 week ago
Selected Answer: C
I think it's C.
upvoted 2 times
KLei
4 months, 4 weeks ago
This option focuses on creating hardened images but does not directly address the ongoing management and patching of existing VMs. It can be part of a solution but is not as comprehensive for maintenance as VM Manager.
upvoted 2 times
yokoyan
4 months, 2 weeks ago
yes. A is correct. not C.
upvoted 1 times
...
...
...
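Answer A's two halves — central hardened images plus VM Manager patching — can be sketched as follows, with all image and project names hypothetical:

```shell
# Instance template pointing at the organization-standard hardened image,
# so new VMs start from the approved baseline.
gcloud compute instance-templates create hardened-base \
    --image=org-hardened-image --image-project=IMAGE_PROJECT

# VM Manager (OS Config) one-off patch job across all eligible VMs in the
# project; recurring schedules would use patch-deployments instead.
gcloud compute os-config patch-jobs execute \
    --instance-filter-all \
    --reboot-config=default \
    --display-name="baseline-patching"
```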

Question 264

Exam Professional Cloud Security Engineer topic 1 question 264 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 264
Topic #: 1

Customers complain about error messages when they access your organization's website. You suspect that the web application firewall rules configured in Cloud Armor are too strict. You want to collect request logs to investigate what triggered the rules and blocked the traffic. What should you do?

  • A. Modify the Application Load Balancer backend and increase the log sample rate to a higher number.
  • B. Enable logging in the Application Load Balancer backend and set the log level to VERBOSE in the Cloud Armor policy.
  • C. Change the configuration of suspicious web application firewall rules in the Cloud Armor policy to preview mode.
  • D. Create a log sink with a filter for logs containing redirected_by_security_policy and set a BigQuery dataset as destination.
Suggested Answer: B 🗳️

Comments

Pime13
4 months ago
Selected Answer: B
https://cloud.google.com/armor/docs/verbose-logging You can adjust the level of detail recorded in your logs. We recommend that you enable verbose logging only when you first create a policy, make changes to a policy, or troubleshoot a policy. If you enable verbose logging, it is in effect for rules in preview mode as well as active (non-previewed) rules during standard operations.
upvoted 1 times
...
cachopo
4 months ago
Selected Answer: B
Enabling verbose logging for your Cloud Armor policy provides the most detailed logs, including information about why specific requests triggered a WAF rule. This level of detail is critical for troubleshooting and refining security policies. - Verbose logging captures detailed request attributes that caused WAF rules to trigger, which are not available in default (normal) logs. - By setting the log level to VERBOSE using the gcloud compute security-policies update command, you can collect the detailed logs needed for investigation.
upvoted 1 times
...
BPzen
4 months, 2 weeks ago
Selected Answer: C
Other Rules Still Enforced: Only the specific rules switched to preview mode are not enforced. All other active rules in the Cloud Armor policy continue to block or redirect traffic as configured. This minimizes the exposure since you're not disabling the entire firewall. B. Enable logging in the Application Load Balancer backend and set the log level to VERBOSE in the Cloud Armor policy. Cloud Armor policies do not have a "VERBOSE" log level. While enabling logging at the backend captures some information, it does not specifically provide insights into which WAF rules were triggered.
upvoted 1 times
cachopo
4 months ago
Actually, Cloud Armor does have "Verbose" log-level: https://cloud.google.com/armor/docs/verbose-logging It's okay to look for answers on Chatgpt. But try to compare the answers too because it's not foolproof.
upvoted 1 times
...
...
nah99
4 months, 2 weeks ago
Selected Answer: B
B collects the logs you want. C has the side-effect of allowing the traffic which may not be appropriate during investigation
upvoted 1 times
...
kalbd2212
4 months, 3 weeks ago
C .. This helps you pinpoint the exact rules that are causing problems and understand why they are being triggered.
upvoted 1 times
...
d0fa7d5
7 months, 1 week ago
Selected Answer: B
I thought B is the correct answer. C is useful for testing the rule, but it doesn’t provide detailed logs. With B, detailed information about which rule caused the block is recorded, which helps in investigating the cause.
upvoted 4 times
...
yokoyan
7 months, 1 week ago
Selected Answer: B
I think it's B.
upvoted 1 times
...
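Answer B maps to two commands, as the verbose-logging documentation linked above describes; the backend service and policy names are placeholders:

```shell
# Enable request logging on the load balancer backend service.
gcloud compute backend-services update web-backend --global \
    --enable-logging --logging-sample-rate=1.0

# Raise Cloud Armor log verbosity to capture which rule (including rules in
# preview mode) matched each request and why.
gcloud compute security-policies update my-policy --log-level=VERBOSE
```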

Question 265

Exam Professional Cloud Security Engineer topic 1 question 265 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 265
Topic #: 1

Your organization must follow the Payment Card Industry Data Security Standard (PCI DSS). To prepare for an audit, you must detect deviations on an infrastructure-as-a-service level in your Google Cloud landing zone. What should you do?

  • A. Create a data profile covering all payment relevant data types. Configure Data Discovery and a risk analysis job in Google Cloud Sensitive Data Protection to analyze findings.
  • B. Use the Google Cloud Compliance Reports Manager to download the latest version of the PCI DSS report. Analyze the report to detect deviations.
  • C. Create an Assured Workloads folder in your Google Cloud organization. Migrate existing projects into the folder and monitor for deviations in the PCI DSS.
  • D. Activate Security Command Center Premium. Use the Compliance Monitoring product to filter findings that may not be PCI DSS compliant.
Suggested Answer: D 🗳️

Comments

1e22522
Highly Voted 7 months ago
Selected Answer: D
It's 100% D
upvoted 5 times
...
zanhsieh
Most Recent 3 months, 3 weeks ago
Selected Answer: D
D. A: No — this option only covers data protection; PCI DSS has other requirements, e.g. IAM, EKM, etc. B: No — this only downloads the PCI DSS report checklist; it does not reflect a snapshot of the current infrastructure. C: No — it only addresses controls, not data privacy.
upvoted 1 times
...
Zek
4 months ago
Selected Answer: D
https://cloud.google.com/security-command-center/docs/compliance-management For each supported security standard, Security Command Center checks a subset of the controls. For the controls checked, Security Command Center shows you how many are passing. For the controls that are not passing, Security Command Center shows you a list of findings that describe the control failures.
upvoted 2 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: D
https://cloud.google.com/security-command-center/docs/compliance-management
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: A
I think it's A.
upvoted 1 times
...
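Answer D's compliance filtering can also be approximated from the CLI. A hedged sketch — the organization ID is a placeholder, and the exact `compliances` filter field should be checked against the current Security Command Center findings schema:

```shell
# List active findings tagged against PCI DSS in SCC Premium; field names
# and the standard identifier here are assumptions, not from the question.
gcloud scc findings list organizations/ORG_ID \
    --filter='state="ACTIVE" AND compliances.standard="pci-dss-v3.2.1"'
```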

Question 267

Exam Professional Cloud Security Engineer topic 1 question 267 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 267
Topic #: 1

Your organization operates in a highly regulated industry and needs to implement strict controls around temporary access to sensitive Google Cloud resources. You have been using Access Approval to manage this access, but your compliance team has mandated the use of a custom signing key. Additionally, they require that the key be stored in a hardware security module (HSM) located outside Google Cloud. You need to configure Access Approval to use a custom signing key that meets the compliance requirements. What should you do?

  • A. Create a new asymmetric signing key in Cloud Key Management System (Cloud KMS) using a supported algorithm and grant the Access Approval service account the IAM signerVerifier role on the key.
  • B. Export your existing Access Approval signing key as a PEM file. Upload the file to your external HSM and reconfigure Access Approval to use the key from the HSM.
  • C. Create a signing key in your external HSM. Integrate the HSM with Cloud External Key Manager (Cloud EKM) and make the key available within your project. Configure Access Approval to use this key.
  • D. Create a new asymmetric signing key in Cloud KMS and configure the key with a rotation period of 30 days. Add the corresponding public key to your external HSM.
Suggested Answer: C 🗳️

Comments

JohnDohertyDoe
3 months, 1 week ago
Selected Answer: C
https://cloud.google.com/assured-workloads/access-approval/docs/review-approve-access-requests-custom-keys#select-key
upvoted 1 times
...
BondleB
5 months, 1 week ago
Selected Answer: C
Only option C fulfils the compliance requirement of custom signing key located outside google cloud.
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: C
I think it's C.
upvoted 3 times
...
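Answer C's flow can be sketched as follows. All names and the external key URI are placeholders; the assumption is that the external HSM is already reachable through a Cloud EKM connection:

```shell
# Externally-backed signing key: the key material stays in the HSM and is
# only referenced from Cloud KMS via Cloud EKM.
gcloud kms keyrings create ekm-ring --location=us-east1
gcloud kms keys create access-approval-signer \
    --keyring=ekm-ring --location=us-east1 \
    --purpose=asymmetric-signing \
    --protection-level=external \
    --skip-initial-version-creation \
    --default-algorithm=ec-sign-p256-sha256
gcloud kms keys versions create \
    --key=access-approval-signer --keyring=ekm-ring --location=us-east1 \
    --external-key-uri=EXTERNAL_KEY_URI

# Point Access Approval at the custom key version.
gcloud access-approval settings update --project=PROJECT_ID \
    --active_key_version=projects/PROJECT_ID/locations/us-east1/keyRings/ekm-ring/cryptoKeys/access-approval-signer/cryptoKeyVersions/1
```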

Question 268

Exam Professional Cloud Security Engineer topic 1 question 268 discussion

Actual exam question from Google's Professional Cloud Security Engineer
Question #: 268
Topic #: 1

Your organization has sensitive data stored in BigQuery and Cloud Storage. You need to design a solution that provides granular and flexible control authorization to read data. What should you do?

  • A. Deidentify sensitive fields within the dataset by using data leakage protection within the Sensitive Data Protection services.
  • B. Use Cloud External Key Manager (Cloud EKM) to encrypt the data in BigQuery and Cloud Storage.
  • C. Grant identity and access management (IAM) roles and permissions to principals.
  • D. Enable server-side encryption on the data in BigQuery and Cloud Storage.
Suggested Answer: C 🗳️

Comments

Pime13
4 months ago
Selected Answer: C
Why Option C: Granular Control: IAM roles and permissions allow you to specify exactly who can access which resources, down to the level of individual datasets or tables. Flexibility: You can create custom roles and assign them to specific users, groups, or service accounts, tailoring access to your organization's needs. Security: By using IAM, you can enforce the principle of least privilege, ensuring that users have only the permissions they need. IAM roles and permissions provide the most comprehensive solution for managing access to sensitive data in BigQuery and Cloud Storage.
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: C
I think it's C.
upvoted 1 times
...

Question 269

Question #: 269
Topic #: 1

Your organization is using Security Command Center Premium as a central tool to detect and alert on security threats. You also want to alert on suspicious outbound traffic that is targeting domains of known suspicious web services. What should you do?

  • A. Create a DNS Server Policy in Cloud DNS and turn on logs. Attach this policy to all Virtual Private Cloud networks with internet connectivity.
  • B. Forward all logs to Chronicle Security Information and Event Management. Create an alert for suspicious egress traffic to the internet.
  • C. Create a Cloud Intrusion Detection endpoint. Connect this endpoint to all Virtual Private Cloud networks with internet connectivity.
  • D. Create an egress firewall policy with Threat Intelligence as the destination. Attach this policy to all Virtual Private Cloud networks with internet connectivity.
Suggested Answer: D 🗳️

Comments

Pime13
4 months ago
Selected Answer: D
https://cloud.google.com/security-command-center/docs/concepts-security-command-center-overview#cases-overview
upvoted 1 times
...
Zek
4 months ago
Selected Answer: D
D seems right to me. https://cloud.google.com/firewall/docs/firewall-policies-rule-details#threat-intelligence-fw-policy Firewall policy rules let you secure your network by allowing or blocking traffic based on Google Threat Intelligence data. For egress rules, specify the destination by using one or more destination Google Threat Intelligence lists.
upvoted 1 times
...
cachopo
4 months ago
Selected Answer: D
The correct option is D. Since it is not necessary to send logs to Chronicle if you are already paying for SCC Premium, which can alert on any outbound traffic that triggers the Threat Intelligence firewall rule. Otherwise, I don't see any point in them explicitly telling you that you have contracted SCC Premium.
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: D
https://cloud.google.com/firewall/docs/firewall-policies-rule-details#threat-intelligence-fw-policy
upvoted 1 times
...
BondleB
5 months, 1 week ago
Selected Answer: B
https://cloud.google.com/chronicle/docs/overview Option B addresses the alert on suspicious outbound traffic while option D does not.
upvoted 3 times
...
sanmeow
6 months ago
Selected Answer: D
D is correct.
upvoted 1 times
...
brpjp
6 months, 3 weeks ago
Answer D is correct as per Gemini: Subscribe to threat intelligence feeds that provide updated lists of known suspicious domains and IP addresses. Integrate these feeds with your security solutions to identify and block outbound connections to these resources.
upvoted 3 times
...
Pach1211
6 months, 4 weeks ago
I'm thinking D
upvoted 2 times
...
yokoyan
7 months, 1 week ago
Selected Answer: B
I think it's B.
upvoted 1 times
...
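As the comments above note, option D's Threat Intelligence rule is, conceptually, an egress match against a curated list of known-bad destinations. A minimal sketch of that idea (the domains below are made-up placeholders, not Google's actual Threat Intelligence feed):

```python
# Sketch of an egress policy that denies traffic to destinations found on a
# threat-intelligence list, mirroring what a Threat Intelligence firewall
# rule does at the network layer. Placeholder domains, illustration only.
THREAT_LIST = {"bad-webservice.example", "malware-c2.example"}

def evaluate_egress(destination_domain: str) -> str:
    """Return the firewall action for an outbound connection."""
    if destination_domain in THREAT_LIST:
        return "deny"   # matches the Threat Intelligence destination list
    return "allow"      # default egress is permitted
```

In the real service, SCC Premium can then surface the denied connections as findings.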

Question 270

Question #: 270
Topic #: 1

You work for a healthcare provider that is expanding into the cloud to store and process sensitive patient data. You must ensure the chosen Google Cloud configuration meets these strict regulatory requirements:

• Data must reside within specific geographic regions.
• Certain administrative actions on patient data require explicit approval from designated compliance officers.
• Access to patient data must be auditable.

What should you do?

  • A. Select a standard Google Cloud region. Restrict access to patient data based on user location and job function by using Access Context Manager. Enable both Cloud Audit Logging and Access Transparency.
  • B. Deploy an Assured Workloads environment in an approved region. Configure Access Approval for sensitive operations on patient data. Enable both Cloud Audit Logs and Access Transparency.
  • C. Deploy an Assured Workloads environment in multiple regions for redundancy. Utilize custom IAM roles with granular permissions. Isolate network-level data by using VPC Service Controls.
  • D. Select multiple standard Google Cloud regions for high availability. Implement Access Control Lists (ACLs) on individual storage objects containing patient data. Enable Cloud Audit Logs.
Suggested Answer: B 🗳️

Comments

Pime13
4 months ago
Selected Answer: B
https://cloud.google.com/assured-workloads/docs/overview
upvoted 1 times
...
BondleB
5 months, 1 week ago
Selected Answer: B
Option B fulfils the given strict regulatory requirements below: • Data must reside within specific geographic regions. • Certain administrative actions on patient data require explicit approval from designated compliance officers. • Access to patient data must be auditable.
upvoted 2 times
...
yokoyan
7 months, 1 week ago
Selected Answer: B
I think it's B.
upvoted 2 times
...

Question 271

Question #: 271
Topic #: 1

You work for a multinational organization that has systems deployed across multiple cloud providers, including Google Cloud. Your organization maintains an extensive on-premises security information and event management (SIEM) system. New security compliance regulations require that relevant Google Cloud logs be integrated seamlessly with the existing SIEM to provide a unified view of security events. You need to implement a solution that exports Google Cloud logs to your on-premises SIEM by using a push-based, near real-time approach. You must prioritize fault tolerance, security, and auto scaling capabilities. In particular, you must ensure that if a log delivery fails, logs are re-sent. What should you do?

  • A. Create a Pub/Sub topic for log aggregation. Write a custom Python script in a Cloud Function. Leverage the Cloud Logging API to periodically pull logs from Google Cloud and forward the logs to the SIEM. Schedule the Cloud Function to run twice per day.
  • B. Collect all logs into an organization-level aggregated log sink and send the logs to a Pub/Sub topic. Implement a primary Dataflow pipeline that consumes logs from this Pub/Sub topic and delivers the logs to the SIEM. Implement a secondary Dataflow pipeline that replays failed messages.
  • C. Deploy a Cloud Logging sink with a filter that routes all logs directly to a syslog endpoint. The endpoint is based on a single Compute Engine hosted on Google Cloud that routes all logs to the on-premises SIEM. Implement a Cloud Function that triggers a retry action in case of failure.
  • D. Utilize custom firewall rules to allow your SIEM to directly query Google Cloud logs. Implement a Cloud Function that notifies the SIEM of a failed delivery and triggers a retry action.
Suggested Answer: B 🗳️

Comments

Zek
4 months ago
Selected Answer: B
B - https://cloud.google.com/architecture/stream-logs-from-google-cloud-to-splunk
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: B
B 100%.
upvoted 1 times
...
KLei
4 months, 4 weeks ago
Selected Answer: B
Use Pub/Sub. A is wrong as it says "periodically pull logs", which is not near real-time and requires programming work.
upvoted 1 times
...
BondleB
5 months, 1 week ago
Selected Answer: B
https://cloud.google.com/architecture/stream-logs-from-google-cloud-to-splunk
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: B
I think it's B.
upvoted 1 times
...
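The fault-tolerance property that makes option B the answer is the replay of failed deliveries. A minimal sketch of that primary/replay delivery loop (an in-memory stand-in, not the actual Dataflow pipelines):

```python
from collections import deque

def deliver_with_replay(messages, send_to_siem, max_attempts=3):
    """Attempt push delivery of each log message; failures are re-queued and
    re-sent, like the secondary Dataflow pipeline that replays failed
    messages. `send_to_siem` returns True on successful delivery."""
    pending = deque((msg, 0) for msg in messages)
    delivered, failed = [], []
    while pending:
        msg, attempts = pending.popleft()
        if send_to_siem(msg):
            delivered.append(msg)
        elif attempts + 1 < max_attempts:
            pending.append((msg, attempts + 1))  # replay pipeline re-sends
        else:
            failed.append(msg)  # exhausted retries; would go to a dead-letter store
    return delivered, failed
```

In the real architecture, Pub/Sub provides the durable queue and redelivery, and Dataflow provides the autoscaling workers.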

Question 272

Question #: 272
Topic #: 1

You work for a global company. Due to compliance requirements, certain Compute Engine instances that reside within specific projects must be located exclusively in cloud regions within the European Union (EU). You need to ensure that existing non-compliant workloads are remediated and prevent future Compute Engine instances from being launched in restricted regions. What should you do?

  • A. Use a third-party configuration management tool to monitor the location of Compute Engine instances. Automatically delete or migrate non-compliant instances, including existing deployments.
  • B. Deploy a Security Command Center source to detect Compute Engine instances created outside the EU. Use a custom remediation function to automatically relocate the instances, run the function once a day.
  • C. Use organization policy constraints in Resource Manager to enforce allowed regions for Compute Engine instance creation within specific projects.
  • D. Set an organization policy that denies the creation of Compute Engine instances outside the EU. Apply the policy to the appropriate projects. Identify existing non-compliant instances and migrate the instances to compliant EU regions.
Suggested Answer: D 🗳️

Comments

Pime13
4 months ago
Selected Answer: D
https://cloud.google.com/resource-manager/docs/organization-policy/defining-locations-supported-services#compute-engine For example, an instance template is a global resource, but you might specify regional or zonal disks in an instance template. Those disks are subject to the resource locations constraints, so, in your instance template, you must specify disks in regions and zones that your org policy permits.
upvoted 1 times
...
Zek
4 months ago
Selected Answer: D
https://cloud.google.com/resource-manager/docs/organization-policy/defining-locations-supported-services
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: D
https://cloud.google.com/resource-manager/docs/organization-policy/defining-locations-supported-services
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: D
I think it's D.
upvoted 3 times
MoAk
4 months, 2 weeks ago
https://cloud.google.com/resource-manager/docs/organization-policy/defining-locations-supported-services
upvoted 1 times
...
...
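Option D has two halves: the org policy prevents new non-EU instances, but existing ones must be found and migrated manually. A toy sketch of that audit step (the region list is an illustrative subset, not the full set of EU regions):

```python
EU_REGIONS = {"europe-west1", "europe-west3", "europe-north1"}  # illustrative subset

def region_of(zone: str) -> str:
    """Derive the region from a zone name, e.g. 'europe-west1-b' -> 'europe-west1'."""
    return zone.rsplit("-", 1)[0]

def find_non_compliant(instances):
    """Return names of instances whose zone lies outside the allowed EU
    regions, i.e. the existing workloads that still need migration."""
    return [name for name, zone in instances
            if region_of(zone) not in EU_REGIONS]
```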

Question 273

Question #: 273
Topic #: 1

You are working with developers to secure custom training jobs running on Vertex AI. For compliance reasons, all supported data types must be encrypted by key materials that reside in the Europe region and are controlled by your organization. The encryption activity must not impact the training operation in Vertex AI. What should you do?

  • A. Encrypt the code, training data, and metadata with Google default encryption. Use customer-managed encryption keys (CMEK) for the trained models exported to Cloud Storage buckets.
  • B. Encrypt the code, training data, metadata, and exported trained models with customer-managed encryption keys (CMEK).
  • C. Encrypt the code, training data, and exported trained models with customer-managed encryption keys (CMEK).
  • D. Encrypt the code, training data, and metadata with Google default encryption. Implement an organization policy that enforces a constraint to restrict the Cloud KMS location to the Europe region.
Suggested Answer: C 🗳️

Comments

Pime13
4 months ago
Selected Answer: C
In general, the CMEK key does not encrypt metadata associated with your operation, like the job's name and region, or a dataset's display name. Metadata associated with operations is always encrypted using Google's default encryption mechanism. https://cloud.google.com/vertex-ai/docs/general/cmek
upvoted 1 times
...
Zek
4 months ago
Selected Answer: C
C sounds right https://cloud.google.com/vertex-ai/docs/general/cmek#resources In general, the CMEK key does not encrypt metadata associated with your operation, like the job's name and region, or a dataset's display name. Metadata associated with operations is always encrypted using Google's default encryption mechanism.
upvoted 1 times
...
kalbd2212
4 months, 1 week ago
Selected Answer: C
Ans is C. Before recommending an answer, please read the doc: in general, the CMEK key does not encrypt metadata associated with your operation, like the job's name and region, or a dataset's display name. Metadata associated with operations is always encrypted using Google's default encryption mechanism. https://cloud.google.com/vertex-ai/docs/general/cmek#benefits
upvoted 1 times
...
nah99
4 months, 2 weeks ago
Selected Answer: C
C seems best. NOT B: "In general, the CMEK key does not encrypt metadata associated with your operation" NOT D: "If you want to control your encryption keys, then you can use customer-managed encryption keys (CMEKs) " https://cloud.google.com/vertex-ai/docs/general/cmek#resources
upvoted 1 times
...
3fd692e
5 months ago
Selected Answer: B
B is correct. D looks good but uses Google Managed Encryption Keys which violates the requirement of control the encryption resources outlined in the question.
upvoted 2 times
...
BondleB
5 months, 1 week ago
Selected Answer: D
Option D enforces that all supported data types must be encrypted by key materials that reside in the Europe region.
upvoted 2 times
...
dat987
6 months ago
Answer is C. The CMEK key doesn't encrypt metadata, like the instance's name and region, associated with your Vertex AI Workbench instance. Metadata associated with Vertex AI Workbench instances is always encrypted using Google's default encryption mechanism.
upvoted 2 times
...
yokoyan
7 months, 1 week ago
Selected Answer: B
I think it's B.
upvoted 1 times
BondleB
5 months, 1 week ago
In general, the CMEK key does not encrypt metadata associated with your operation, like the job's name and region, or a dataset's display name. Metadata associated with operations is always encrypted using Google's default encryption mechanism.
upvoted 1 times
...
...

Question 274

Question #: 274
Topic #: 1

Your EU-based organization stores both Personally Identifiable Information (PII) and non-PII data in Cloud Storage buckets across multiple Google Cloud regions. EU data privacy laws require that the PII data must not be stored outside of the EU. To help meet this compliance requirement, you want to detect if Cloud Storage buckets outside of the EU contain healthcare data. What should you do?

  • A. Create a Sensitive Data Protection job. Specify the infoType of data to be detected and run the job across all Google Cloud Storage buckets.
  • B. Create a log sink with a filter on resourceLocation.currentLocations. Trigger an alert if a log message appears with a non-EU country.
  • C. Activate Security Command Center Premium. Use compliance monitoring to detect resources that do not follow the applicable healthcare regulation.
  • D. Enforce the gcp.resourceLocations organization policy and add "EU" in a custom rule that only applies on resources with the tag "healthcare".
Suggested Answer: A 🗳️

Comments

LegoJesus
2 months ago
Selected Answer: C
Answer should be C. A - a data protection job just finds data that might contain PII. If you run it on all buckets in all regions, that won't confirm with the requirements of detecting buckets outside the EU. B - Irrelevant. C - Compliance monitoring in SCC will do this job for you. Just go in, click the compliance you're interested in (e.g. GDPR, healthcare data etc), and it will tell you why you're not compliant and where. D - Irrelevant.
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: A
Definitely A
upvoted 1 times
...
BondleB
5 months, 1 week ago
Selected Answer: A
Specifying the infoType of data to be detected lets you find storage buckets outside the EU that contain healthcare data.
upvoted 1 times
...
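Conceptually, a Sensitive Data Protection inspection job scans stored content for configured infoTypes and reports findings. A toy stand-in for that idea (real DLP jobs use Google's built-in detectors, not a single regex; the pattern below is illustrative only):

```python
import re

# Toy inspection "job": one made-up detector keyed by an infoType-style name.
# Real Sensitive Data Protection detectors are far more sophisticated.
INFOTYPE_PATTERNS = {
    "US_HEALTHCARE_NPI": re.compile(r"\b\d{10}\b"),  # 10-digit provider ID, illustrative
}

def inspect(text):
    """Return the sorted list of infoTypes found in `text`, like the
    findings a DLP inspection job produces per scanned object."""
    return sorted(name for name, pat in INFOTYPE_PATTERNS.items()
                  if pat.search(text))
```

Running such a job across buckets in non-EU regions and alerting on any findings is the detection step option A describes.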

Question 275

Question #: 275
Topic #: 1

Your organization is migrating business critical applications to Google Cloud across multiple projects. You only have the required IAM permission at the Google Cloud organization level. You want to grant project access to support engineers from two partner organizations using their existing identity provider (IdP) credentials. What should you do?

  • A. Create two single sign-on (SSO) profiles for the internal and partner IdPs by using SSO for Cloud Identity.
  • B. Create users manually by using the Google Cloud console. Assign the users to groups.
  • C. Create two workforce identity pools for the partner IdPs.
  • D. Sync user identities from their existing IdPs to Cloud Identity by using Google Cloud Directory Sync (GCDS).
Suggested Answer: C 🗳️

Comments

jmaquino
5 months ago
Selected Answer: C
Workforce Identity Federation lets you use an external identity provider (IdP) to authenticate and authorize a workforce—a group of users, such as employees, partners, and contractors—using IAM, so that the users can access Google Cloud services. With Workforce Identity Federation you don't need to synchronize user identities from your existing IdP to Google Cloud identities, as you would with Cloud Identity's Google Cloud Directory Sync (GCDS). Workforce Identity Federation extends Google Cloud's identity capabilities to support syncless, attribute-based single sign on.
upvoted 2 times
...
3fd692e
5 months ago
Selected Answer: C
Classic workforce identity use-case because the question references outside identity providers. You wouldn't use GCDS in this scenario.
upvoted 1 times
...
json4u
5 months, 4 weeks ago
Answer is C. This case shows well when to use Work Force Federation.
upvoted 2 times
json4u
5 months, 4 weeks ago
I meant Workforce Identity Federation :)
upvoted 2 times
...
...
dat987
6 months ago
Selected Answer: C
Answer is C
upvoted 3 times
...
yokoyan
7 months, 1 week ago
Selected Answer: D
I think it's D.
upvoted 2 times
yokoyan
4 months, 2 weeks ago
not D. C is correct.
upvoted 1 times
...
KLei
4 months, 3 weeks ago
Google Cloud Directory Sync (GCDS) typically applies to syncing users from on-premises directories to Google Workspace.
upvoted 2 times
...
...

Question 276

Question #: 276
Topic #: 1

You are creating a secure network architecture. You must fully isolate development and production environments, and prevent any network traffic between the two environments. The network team requires that there is only one central entry point to the cloud network from the on-premises environment. What should you do?

  • A. Create one Virtual Private Cloud (VPC) network per environment. Add the on-premises entry point to the production VPC. Peer the VPCs with each other and create firewall rules to prevent traffic.
  • B. Create one shared Virtual Private Cloud (VPC) network and use it as the entry point to the cloud network. Create separate subnets per environment. Create firewall rules to prevent traffic.
  • C. Create one Virtual Private Cloud (VPC) network per environment. Create a VPC Service Controls perimeter per environment and add one environment VPC to each.
  • D. Create one Virtual Private Cloud (VPC) network per environment. Create one additional VPC for the entry point to the cloud network. Peer the entry point VPC with the environment VPCs.
Suggested Answer: D 🗳️

Comments

nah99
4 months, 2 weeks ago
Selected Answer: D
D satisfies all requirements
upvoted 2 times
...
koo_kai
6 months ago
Selected Answer: D
It's D
upvoted 1 times
...
d0fa7d5
7 months ago
Selected Answer: D
d is correct?
upvoted 1 times
...
SQLbox
7 months ago
C, because you must fully isolate development and production environments and prevent any network traffic between the two environments.
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: C
I think it's C.
upvoted 1 times
1e22522
7 months ago
VPC Service Controls help protect data and manage access but do not provide the same level of network isolation as creating separate VPCs. Service Controls are more about data access and security policies rather than network segmentation. Thus, Option D is the most suitable approach for achieving the required isolation and centralized network entry point.
upvoted 4 times
...
...
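The property that makes option D's hub-and-spoke design safe is that VPC Network Peering is non-transitive: a network reaches only its direct peers, so peering the entry-point VPC with both environment VPCs never connects dev to prod. A minimal model of that reachability rule (hypothetical network names):

```python
# Hub-and-spoke peerings: entry point peered with each environment VPC.
PEERINGS = {("entry", "dev"), ("entry", "prod")}

def can_reach(src: str, dst: str) -> bool:
    """VPC peering is non-transitive: only directly peered networks can
    exchange traffic; nothing transits through a shared peer."""
    return (src, dst) in PEERINGS or (dst, src) in PEERINGS
```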

Question 277

Question #: 277
Topic #: 1

You work for a large organization that is using Cloud Identity as the identity provider (IdP) on Google Cloud. Your InfoSec team has mandated the enforcement of a strong password with a length between 12 and 16 characters for all users. After configuring this requirement, users are still able to access the Google Cloud console with passwords that are less than 12 characters. You need to fix this problem within the Admin console. What should you do?

  • A. Review each user's password configuration and reset existing passwords.
  • B. Review the organization password management setting and select Enforce password policy at the next sign-in.
  • C. Review each user's password configuration and select Enforce strong password.
  • D. Review the organization password management setting and select Enforce strong password.
Suggested Answer: B 🗳️

Comments

dat987
Highly Voted 6 months ago
Selected Answer: B
Answer is B https://support.google.com/a/answer/139399?hl=en
upvoted 6 times
...
KLei
Most Recent 4 months, 3 weeks ago
Selected Answer: B
B is the best answer.
upvoted 1 times
...
dv1
5 months, 2 weeks ago
Sorry, I meant to write "therefore option B is best".
upvoted 2 times
...
dv1
5 months, 3 weeks ago
Selected Answer: B
According to the question, strong password policy is already enforced and we only need to fix the ones that still use short passwords, therefore option D is best.
upvoted 2 times
...
yokoyan
7 months, 1 week ago
Selected Answer: D
I think it's D.
upvoted 3 times
yokoyan
4 months, 2 weeks ago
B is correct.
upvoted 2 times
...
...
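The policy itself is a simple length constraint; the catch in the question is that it only takes effect when applied at the next sign-in (option B). A sketch of the mandated check (length only; a real policy would also cover character classes):

```python
MIN_LEN, MAX_LEN = 12, 16  # the InfoSec mandate from the question

def password_compliant(password: str) -> bool:
    """True if the password satisfies the 12-16 character policy. Cloud
    Identity evaluates such a policy when the user next signs in, which is
    why existing short passwords keep working until then."""
    return MIN_LEN <= len(password) <= MAX_LEN
```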

Question 278

Question #: 278
Topic #: 1

Your organization is preparing to build business services in Google Cloud for the first time. You must determine where to apply appropriate controls or policies. You must also identify what aspects of your cloud deployment are managed by Google. What should you do?

  • A. Model your deployment on the Google Enterprise foundations blueprint. Follow the blueprint exactly and rely on the blueprint to maintain the posture necessary for your business.
  • B. Use the Risk Manager tool in the Risk Protection Program to generate a report on your cloud security posture. Obtain cyber insurance coverage.
  • C. Subscribe to the Google Cloud release notes to keep up on product updates and when new services are available. Evaluate new services for appropriate use before enabling their API.
  • D. Study the shared responsibilities model. Depending on your business scenario, you might need to consider your responsibilities based on the location of your business offices, your customers, and your data.
Suggested Answer: D 🗳️

Comments

MoAk
4 months, 2 weeks ago
Selected Answer: D
They love to bang on about this
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: D
I think it's D.
upvoted 2 times
...

Question 280

Question #: 280
Topic #: 1

Your organization has an application hosted in Cloud Run. You must control access to the application by using Cloud Identity-Aware Proxy (IAP) with these requirements:

• Only users from the AppDev group may have access.
• Access must be restricted to internal network IP addresses.

What should you do?

  • A. Deploy a VPN gateway and instruct the AppDev group to connect to the company network before accessing the application.
  • B. Create an access level that includes conditions for internal IP address ranges and AppDev groups. Apply this access level to the application's IAP policy.
  • C. Configure firewall rules to limit access to IAP based on the AppDev group and source IP addresses.
  • D. Configure IAP to enforce multi-factor authentication (MFA) for all users and use network intrusion detection systems (NIDS) to block unauthorized access attempts.
Suggested Answer: B 🗳️

Comments

Zek
4 months ago
Selected Answer: B
An access level is a set of attributes assigned to requests based on their origin. Using information such as device type, IP address, and user identity, you can designate what level of access to grant. https://cloud.google.com/beyondcorp-enterprise/docs/access-levels
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: B
I think it's B.
upvoted 3 times
...
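Option B's access level combines two conditions that must both hold: group membership AND an internal source address. A minimal sketch of that evaluation (the 10.0.0.0/8 range and group name are assumptions for illustration, not values from the question's environment):

```python
import ipaddress

INTERNAL_RANGE = ipaddress.ip_network("10.0.0.0/8")  # assumed internal CIDR
ALLOWED_GROUP = "AppDev"

def access_granted(user_groups, source_ip: str) -> bool:
    """Both access-level conditions must hold, as when IAP evaluates an
    Access Context Manager access level: the caller is in the AppDev group
    AND the request originates from an internal IP range."""
    return (ALLOWED_GROUP in user_groups
            and ipaddress.ip_address(source_ip) in INTERNAL_RANGE)
```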

Question 281

Question #: 281
Topic #: 1

You just implemented a Secure Web Proxy instance on Google Cloud for your organization. You were able to reach the internet when you tested this configuration on your test instance. However, developers cannot access the allowed URLs on the Secure Web Proxy instance from their Linux instance on Google Cloud. You want to solve this problem with developers. What should you do?

  • A. Configure a Cloud NAT gateway to enable internet access from the developer instance subnet.
  • B. Ensure that the developers have restarted their instance and HTTP service is enabled.
  • C. Ensure that the developers have explicitly configured the proxy address on their instance.
  • D. Configure a firewall rule to allow HTTP/S from the developer instance.
Suggested Answer: C 🗳️

Comments

Zek
4 months ago
Selected Answer: C
https://cloud.google.com/secure-web-proxy/docs/overview Secure Web Proxy is a cloud first service that helps you secure egress web traffic (HTTP/S). You configure your clients to explicitly use Secure Web Proxy as a gateway.
upvoted 1 times
...
Pime13
4 months ago
Selected Answer: C
This step is crucial because Secure Web Proxy acts as an explicit proxy server, which requires clients to have the proxy address configured on their instances to route traffic through the proxy https://cloud.google.com/secure-web-proxy/docs/quickstart https://cloud.google.com/secure-web-proxy/docs/policies-overview
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: C
C is good.
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: C
I think it's C.
upvoted 1 times
...
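As the comments note, Secure Web Proxy is an explicit proxy, so each client must be pointed at it; nothing routes through it automatically. A minimal sketch on a developer's Linux instance (the proxy address and port are hypothetical):

```shell
# Point HTTP(S) clients at the Secure Web Proxy address; no Cloud NAT needed.
export http_proxy="http://10.128.0.5:443"
export https_proxy="http://10.128.0.5:443"

# Requests to allowed URLs now egress through the proxy's policy.
curl -sI https://allowed.example.com
```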

Question 282

You have just created a new log bucket to replace the _Default log bucket. You want to route all log entries that are currently routed to the _Default log bucket to this new log bucket, in the most efficient manner. What should you do?

  • A. Create exclusion filters for the _Default sink to prevent it from receiving new logs. Create a user-defined sink, and select the new log bucket as the sink destination.
  • B. Disable the _Default sink. Create a user-defined sink and select the new log bucket as the sink destination.
  • C. Create a user-defined sink with inclusion filters copied from the _Default sink. Select the new log bucket as the sink destination.
  • D. Edit the _Default sink, and select the new log bucket as the sink destination.
Suggested Answer: D 🗳️

Comments

Pime13
4 months ago
Selected Answer: D
https://cloud.google.com/logging/docs/buckets#manage_buckets
upvoted 1 times
...
nah99
4 months, 2 weeks ago
Selected Answer: D
D is most efficient and is possible to do. I just checked in GCP b/c people using AI as their source in this forum is a major red flag
upvoted 2 times
...
3fd692e
5 months ago
Selected Answer: D
D is correct
upvoted 1 times
...
koo_kai
6 months ago
Selected Answer: D
I think it's D
upvoted 2 times
...
brpjp
6 months, 3 weeks ago
D is the correct answer; you can change the log destination for an existing sink without creating a new sink, as per Gemini.
upvoted 4 times
...
yokoyan
7 months, 1 week ago
Selected Answer: C
I think it's C.
upvoted 1 times
...
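Answer D amounts to a single command; a sketch, assuming the replacement bucket is named `new-default` in the same project (project and bucket names hypothetical):

```shell
# Repoint the _Default sink at the replacement log bucket.
gcloud logging sinks update _Default \
  logging.googleapis.com/projects/my-project/locations/global/buckets/new-default
```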

Question 283

Your organization's use of the Google Cloud has grown substantially and there are many different groups using different cloud resources independently. You must identify common misconfigurations and compliance violations across the organization and track findings for remedial action in a dashboard. What should you do?

  • A. Create a filter set in Cloud Asset Inventory to identify service accounts with high privileges and IAM principals with Gmail domains.
  • B. Scan for and alert on vulnerabilities and misconfigurations by using Security Health Analytics detectors in Security Command Center Premium.
  • C. Set up filters on Cloud Audit Logs to flag log entries for specific, risky API calls, and display the calls in a Cloud Log Analytics dashboard.
  • D. Alert and track emerging attacks detected in your environment by using Event Threat Detection detectors.
Suggested Answer: B 🗳️

Comments

Pime13
4 months ago
Selected Answer: B
https://cloud.google.com/security-command-center/docs/concepts-security-health-analytics Security Health Analytics is a managed service of Security Command Center that scans your cloud environments for common misconfigurations that might expose you to attack. Security Health Analytics is automatically enabled when you activate Security Command Center
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: B
https://cloud.google.com/security-command-center/docs/concepts-security-health-analytics
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: B
I think it's B.
upvoted 2 times
...

Question 284

You are responsible for a set of Cloud Functions running on your organization's Google Cloud environment. During the last annual security review, secrets were identified in environment variables of some of these Cloud Functions. You must ensure that secrets are identified in a timely manner. What should you do?

  • A. Implement regular peer reviews to assess the environment variables and identify secrets in your Cloud Functions. Raise a security incident if secrets are discovered.
  • B. Implement a Cloud Function that scans the environment variables multiple times a day, and creates a finding in Security Command Center if secrets are discovered.
  • C. Use Sensitive Data Protection to scan the environment variables multiple times per day, and create a finding in Security Command Center if secrets are discovered.
  • D. Integrate dynamic application security testing into the CI/CD pipeline that scans the application code for the Cloud Functions. Fail the build process if secrets are discovered.
Suggested Answer: C 🗳️

Comments

nah99
4 months, 2 weeks ago
Selected Answer: C
https://cloud.google.com/sensitive-data-protection/docs/secrets-discovery#why
upvoted 2 times
...
KLei
4 months, 3 weeks ago
Selected Answer: C
(Dynamic application security testing): While this can help identify secrets in the code, it does not specifically address the secrets that may be present in environment variables
upvoted 1 times
...
dv1
5 months, 3 weeks ago
Selected Answer: C
Question asks for secret identification, not blocking the cloud runs if exposed secrets are detected (what D says).
upvoted 2 times
...
dat987
6 months ago
Selected Answer: C
I think C: To perform secrets discovery, you create a discovery scan configuration at the organization or project level. Within your selected scope, Sensitive Data Protection periodically scans Cloud Run functions for secrets in build and runtime environment variables. If a secret is present in an environment variable, Sensitive Data Protection sends a Secrets in environment variables vulnerability finding to Security Command Center. No data profiles are generated. Any findings are only available through Security Command Center. Sensitive Data Protection generates a maximum of one finding per function. For example, if secrets are found in two environment variables in the same function, only one finding is generated in Security Command Center.
upvoted 2 times
...
brpjp
6 months, 3 weeks ago
Correct answer - D. For answer C, you need to integrate Sensitive Data Protection with CI/CD pipelines, which is missing here.
upvoted 3 times
...
yokoyan
7 months, 1 week ago
Selected Answer: D
I think it's D.
upvoted 3 times
...

Question 285

Your organization is developing a new SaaS application on Google Cloud. Stringent compliance standards require visibility into privileged account activity, and potentially unauthorized changes and misconfigurations to the application's infrastructure. You need to monitor administrative actions, log changes to IAM roles and permissions, and be able to trace potentially unauthorized configuration changes. What should you do?

  • A. Create log sinks to Cloud Storage for long-term retention. Set up log-based alerts in Cloud Logging based on relevant log types. Enable VPC Flow Logs for network visibility.
  • B. Deploy Cloud IDS and activate Firewall Rules Logging. Create a custom dashboard in Security Command Center to visualize potential intrusion attempts.
  • C. Detect sensitive administrative actions by using Cloud Logging with custom filters. Enable VPC Flow Logs with BigQuery exports for rapid analysis of network traffic patterns.
  • D. Enable Event Threat Detection and Security Health Analytics in Security Command Center. Set up detailed logging for IAM-related activity and relevant project resources by deploying Cloud Audit Logs.
Suggested Answer: D 🗳️

Comments

Pime13
4 months ago
Selected Answer: D
https://cloud.google.com/security-command-center/docs/concepts-security-health-analytics
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: D
misconfigurations = Security Health Analytics
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: D
I think it's D.
upvoted 2 times
...
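Alongside answer D's SCC detectors, the IAM-related Admin Activity audit logs can be queried directly; a sketch of a Cloud Logging filter for IAM policy changes (project ID hypothetical):

```shell
# List recent Admin Activity audit entries that changed an IAM policy.
gcloud logging read \
  'logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.methodName:"SetIamPolicy"' \
  --project=my-project --limit=10
```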

Question 286

Your application development team is releasing a new critical feature. To complete their final testing, they requested 10 thousand real transaction records. The new feature includes format checking on the primary account number (PAN) of a credit card. You must support the request and minimize the risk of unintended personally identifiable information (PII) exposure. What should you do?

  • A. Run the new application by using Confidential Computing to ensure PII and card PAN is encrypted in use.
  • B. Scan and redact PII from the records by using the Cloud Data Loss Prevention API. Perform format-preserving encryption on the card PAN.
  • C. Encrypt the records by using Cloud Key Management Service to protect the PII and card PAN.
  • D. Build a tool to replace the card PAN and PII fields with randomly generated values.
Suggested Answer: B 🗳️

Comments

Pime13
4 months ago
Selected Answer: B
https://cloud.google.com/architecture/de-identification-re-identification-pii-using-cloud-dlp https://cloud.google.com/blog/products/identity-security/taking-charge-of-your-data-using-cloud-dlp-to-de-identify-and-obfuscate-sensitive-information. Using the Cloud Data Loss Prevention (DLP) API to scan and redact PII, combined with format-preserving encryption, directly addresses the need to protect sensitive data while maintaining the necessary format for testing. This ensures that the development team can perform their tests without exposing real PII.
upvoted 1 times
...
KLei
4 months, 3 weeks ago
Selected Answer: B
A (Confidential Computing) may not directly address the need to redact and protect PII before testing.
upvoted 1 times
...
dat987
6 months ago
Selected Answer: B
I think B
upvoted 1 times
...
koo_kai
6 months ago
Selected Answer: B
format check
upvoted 2 times
json4u
5 months, 4 weeks ago
B can preserve the format for testing purposes while ensuring that the actual data remains protected, whereas A doesn't address the issue of storing or sharing PII securely for testing.
upvoted 1 times
...
...
brpjp
6 months, 3 weeks ago
Answer B is correct. A - is missing this requirement - The new feature includes format checking on the primary account number (PAN) of a credit card. By encrypting you will not preserve the format.
upvoted 4 times
...
Ponchi14
7 months ago
Selected Answer: A
A is correct. Redacting PII defeats the purpose of using real transaction records.
upvoted 1 times
KLei
4 months, 3 weeks ago
real tx doesn't mean real PAN
upvoted 1 times
...
...
yokoyan
7 months, 1 week ago
Selected Answer: A
I think it's A.
upvoted 1 times
...
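A sketch of answer B's second step: the DLP API's `CryptoReplaceFfxFpeConfig` tokenizes the PAN while keeping it a 16-digit number, so the feature's format checking still passes. The project, key names, and wrapped key below are hypothetical placeholders:

```shell
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/my-project/content:deidentify" \
  -d '{
    "inspectConfig": {"infoTypes": [{"name": "CREDIT_CARD_NUMBER"}]},
    "deidentifyConfig": {
      "infoTypeTransformations": {
        "transformations": [{
          "infoTypes": [{"name": "CREDIT_CARD_NUMBER"}],
          "primitiveTransformation": {
            "cryptoReplaceFfxFpeConfig": {
              "cryptoKey": {
                "kmsWrapped": {
                  "wrappedKey": "BASE64_WRAPPED_KEY",
                  "cryptoKeyName": "projects/my-project/locations/global/keyRings/dlp/cryptoKeys/fpe-key"
                }
              },
              "alphabet": "NUMERIC"
            }
          }
        }]
      }
    },
    "item": {"value": "PAN: 4111111111111111"}
  }'
```

Format-preserving encryption is reversible with the same key, so the team can re-identify records later if a defect needs tracing back to a real transaction.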

Question 287

You work for a banking organization. You are migrating sensitive customer data to Google Cloud that is currently encrypted at rest while on-premises. There are strict regulatory requirements when moving sensitive data to the cloud. Independent of the cloud service provider, you must be able to audit key usage and be able to deny certain types of decrypt requests. You must choose an encryption strategy that will ensure robust security and compliance with the regulations. What should you do?

  • A. Utilize Google default encryption and Cloud IAM to keep the keys within your organization's control.
  • B. Implement Cloud External Key Manager (Cloud EKM) with Access Approval to integrate with your existing on-premises key management solution.
  • C. Implement Cloud External Key Manager (Cloud EKM) with Key Access Justifications to integrate with your existing on-premises key management solution.
  • D. Utilize customer-managed encryption keys (CMEK) created in a dedicated Google Compute Engine instance with Confidential Compute encryption, under your organization's control.
Suggested Answer: C 🗳️

Comments

json4u
Highly Voted 5 months, 4 weeks ago
Answer is C. - Access Approval : This lets you control access to your organization's data by Google personnel. - Key Access Justifications : This provides a justification for every request to access keys stored in an external key manager.
upvoted 5 times
...
Pime13
Most Recent 4 months ago
Selected Answer: C
https://cloud.google.com/kms/docs/ekm#terminology https://cloud.google.com/assured-workloads/key-access-justifications/docs/overview Key Access Justifications When you use Cloud EKM with Key Access Justifications, each request to your external key management partner includes a field that identifies the reason for each request. You can configure your external key management partner to allow or deny requests based on the Key Access Justifications code provided.
upvoted 1 times
...
MoAk
4 months, 3 weeks ago
Selected Answer: C
Answer is C. https://cloud.google.com/kms/docs/ekm#terminology
upvoted 2 times
...
KLei
4 months, 3 weeks ago
Selected Answer: B
C does not offer the same level of access control as Access Approval, which is critical for denying unauthorized decrypt requests.
upvoted 1 times
...
dv1
5 months, 3 weeks ago
Selected Answer: C
Key Access Justifications does what the question asks for.
upvoted 3 times
...
yokoyan
7 months, 1 week ago
Selected Answer: B
I think it's B.
upvoted 2 times
...
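As a sketch of answer C's first half, a Cloud EKM key is created with the `external` protection level, so the key material stays in the on-premises manager; Key Access Justifications allow/deny policies are then configured with the external key management partner. The key ring, key name, and location below are hypothetical:

```shell
gcloud kms keyrings create ekm-ring --location=us-east1

# Key versions reference an external key URI; Google never holds the key material.
gcloud kms keys create customer-data-key \
  --keyring=ekm-ring --location=us-east1 \
  --purpose=encryption --protection-level=external \
  --skip-initial-version-creation
```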

Question 288

Your organization is developing an application that will have both corporate and public end-users. You want to centrally manage those customers' identities and authorizations. Corporate end users must access the application by using their corporate user and domain name. What should you do?

  • A. Add the corporate and public end-user domains to domain restricted sharing on the organization.
  • B. Federate the customers' identity provider (IdP) with Workforce Identity Federation in your application's project.
  • C. Do nothing. Google Workspace identities will allow you to filter personal accounts and disable their access.
  • D. Use a customer identity and access management tool (CIAM) like Identity Platform.
Suggested Answer: D 🗳️

Comments

TibiMuhoho
3 months, 4 weeks ago
Selected Answer: D
Workforce Identity Federation is designed for managing external workforce identities, such as contractors or business partners, not public-facing end-users. Therefore, cannot be B.
upvoted 1 times
...
Pime13
4 months ago
Selected Answer: D
Option B suggests federating the customers' identity provider (IdP) with Workforce Identity Federation in your application's project. While Workforce Identity Federation is a powerful tool for integrating external identity providers, it is primarily designed for managing access to Google Cloud resources by external identities, such as contractors or partners, rather than managing end-user identities for an application. Using a customer identity and access management tool (CIAM) like Identity Platform (Option D) is more appropriate because it is specifically designed to handle both corporate and public end-user identities. It provides features like multi-factor authentication, user management, and integration with various identity providers, making it a comprehensive solution for managing diverse user bases.
upvoted 1 times
...
BPzen
4 months, 2 weeks ago
Selected Answer: D
For an application serving both corporate and public end-users, a Customer Identity and Access Management (CIAM) solution is the best approach. Google Cloud Identity Platform provides the tools necessary to centrally manage user authentication and authorization while supporting both corporate and public users. B. Federate the customers' identity provider (IdP) with Workforce Identity Federation in your application's project. Workforce Identity Federation is intended for internal workforce users (employees, contractors) to access Google Cloud resources, not for managing application users. It does not support public users, making it unsuitable for this use case.
upvoted 1 times
...
nah99
4 months, 2 weeks ago
Selected Answer: D
Torn b/w B & D. B just doesn't address the public end users at all. Question seems poorly written (who are the customers..)
upvoted 1 times
...
KLei
4 months, 3 weeks ago
Selected Answer: B
D is incorrect: the question specifically highlights the need for corporate users to access the application using their corporate user credentials, which is best addressed through Workforce Identity Federation.
upvoted 2 times
...
dv1
5 months, 3 weeks ago
"the application will have both corporate AND PUBLIC END-USERS". This means that the solution applies to Identity Platform, therefore D.
upvoted 2 times
...
json4u
5 months, 4 weeks ago
Obviously it's D. - Identity Platform : A customer identity and access management (CIAM) platform that lets users sign in to your applications and services. This is ideal for users who want to be their own identity provider, or who need the enterprise-ready functionality Identity Platform provides. - Workforce Identity Federation : This is an IAM feature that lets you configure and secure granular access for your workforce—employees and partners—by federating identities from an external identity provider (IdP).
upvoted 2 times
...
brpjp
6 months, 3 weeks ago
B is correct - By federating your customers' IdP with WIF, you can provide a seamless authentication experience for your users while maintaining control over identity and access management in your Google Cloud environment.
upvoted 3 times
...
yokoyan
7 months, 1 week ago
Selected Answer: B
I think it's B.
upvoted 1 times
...

Question 289

You work for an organization that handles sensitive customer data. You must secure a series of Google Cloud Storage buckets housing this data and meet these requirements:

• Multiple teams need varying access levels (some read-only, some read-write).
• Data must be protected in storage and at rest.
• It's critical to track file changes and audit access for compliance purposes.
• For compliance purposes, the organization must have control over the encryption keys.

What should you do?

  • A. Create IAM groups for each team and manage permissions at the group level. Employ server-side encryption and Object Versioning by Google Cloud Storage. Configure cloud monitoring tools to alert on anomalous data access patterns.
  • B. Set individual permissions for each team and apply access control lists (ACLs) to each bucket and file. Enforce TLS encryption for file transfers. Enable Object Versioning and Cloud Audit Logs for the storage buckets.
  • C. Use predefined IAM roles tailored to each team's access needs, such as Storage Object Viewer and Storage Object User. Utilize customer-supplied encryption keys (CSEK) and enforce TLS encryption. Turn on both Object Versioning and Cloud Audit Logs for the storage buckets.
  • D. Assign IAM permissions for all teams at the object level. Implement third-party software to encrypt data at rest. Track data access by using network logs.
Suggested Answer: C 🗳️

Comments

Pime13
4 months ago
Selected Answer: C
This approach ensures that: Access Control: IAM roles are tailored to each team's needs, providing the principle of least privilege. Data Protection: Customer-supplied encryption keys (CSEK) give your organization control over encryption keys, and TLS encryption protects data in transit. Compliance and Auditing: Object Versioning and Cloud Audit Logs help track file changes and audit access for compliance purposes. https://cloud.google.com/architecture/framework/security/privacy
upvoted 1 times
Pime13
4 months ago
https://cloud.google.com/monitoring/compliance/data-at-rest https://cloud.google.com/blog/products/storage-data-transfer/google-cloud-storage-best-practices-to-help-ensure-data-privacy-and-security
upvoted 1 times
...
...
KLei
4 months, 3 weeks ago
Selected Answer: C
By utilizing CSEK, your organization maintains control over the encryption keys, which is crucial for compliance purposes.
upvoted 2 times
...
yokoyan
7 months, 1 week ago
Selected Answer: C
I think it's C.
upvoted 3 times
json4u
5 months, 4 weeks ago
I agree. Only C satisfies all requirements above.
upvoted 2 times
...
...
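A sketch of answer C's moving parts (bucket, group, and key values hypothetical); note that a CSEK is supplied per request rather than stored by Google, which is what gives the organization control of the keys:

```shell
# Least-privilege access per team: read-only vs. read-write.
gcloud storage buckets add-iam-policy-binding gs://sensitive-bucket \
  --member=group:readers@example.com --role=roles/storage.objectViewer
gcloud storage buckets add-iam-policy-binding gs://sensitive-bucket \
  --member=group:writers@example.com --role=roles/storage.objectUser

# Track file changes for compliance.
gcloud storage buckets update gs://sensitive-bucket --versioning

# Upload with a customer-supplied key (base64-encoded AES-256 key).
gcloud storage cp customers.csv gs://sensitive-bucket/ \
  --encryption-key="$CSEK_BASE64"
```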

Question 290

You are implementing communications restrictions for specific services in your Google Cloud organization. Your data analytics team works in a dedicated folder. You need to ensure that access to BigQuery is controlled for that folder and its projects. The data analytics team must be able to control the restrictions only at the folder level. What should you do?

  • A. Create an organization-level access policy with a service perimeter to restrict BigQuery access. Assign the data analytics team the Access Context Manager Editor role on the access policy to allow the team to configure the access policy.
  • B. Create a scoped policy on the folder with a service perimeter to restrict BigQuery access. Assign the data analytics team the Access Context Manager Editor role on the scoped policy to allow the team to configure the scoped policy.
  • C. Define a hierarchical firewall policy on the folder to deny BigQuery access. Assign the data analytics team the Compute Organization Firewall Policy Admin role to allow the team to configure rules for the firewall policy.
  • D. Enforce the Restrict Resource Service Usage organization policy constraint on the folder to restrict BigQuery access. Assign the data analytics team the Organization Policy Administrator role to allow the team to manage exclusions within the folder.
Suggested Answer: B 🗳️

Comments

Pime13
4 months ago
Selected Answer: B
This approach allows you to apply a service perimeter specifically to the folder, ensuring that BigQuery access is controlled at the desired level. By assigning the Access Context Manager Editor role to the data analytics team, you enable them to manage the scoped policy as needed.
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: B
B is good.
upvoted 1 times
...
KLei
4 months, 3 weeks ago
Selected Answer: B
Scoped Policy: A scoped policy allows you to apply restrictions specifically to a folder and its projects Service Perimeter: By using a service perimeter, you can define which services (like BigQuery) can be accessed from within the specified folder.
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: B
I think it's B.
upvoted 3 times
json4u
5 months, 4 weeks ago
I think using a service perimeter is key.
upvoted 1 times
...
...
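A sketch of answer B: a scoped access policy created on the folder, then a service perimeter inside it restricting BigQuery. Organization, folder, project, and policy IDs below are hypothetical:

```shell
# Scoped policy that the data analytics team can administer at folder level.
gcloud access-context-manager policies create \
  --organization=123456789 \
  --scopes=folders/456 \
  --title="analytics-folder-policy"

# Perimeter inside that policy restricting BigQuery for the folder's projects.
gcloud access-context-manager perimeters create analytics_perimeter \
  --policy=POLICY_ID \
  --title="Analytics BigQuery perimeter" \
  --resources=projects/1111,projects/2222 \
  --restricted-services=bigquery.googleapis.com
```

Granting the team Access Context Manager Editor on the scoped policy keeps their control limited to the folder, unlike the organization-level policy in option A.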

Question 291

Your organization is using a third-party identity and authentication provider to centrally manage users. You want to use this identity provider to grant access to the Google Cloud console without syncing identities to Google Cloud. Users should receive permissions based on attributes. What should you do?

  • A. Configure the central identity provider as a workforce identity pool provider in Workforce Identity Federation. Create an attribute mapping by using the Common Expression Language (CEL).
  • B. Configure a periodic synchronization of relevant users and groups with attributes to Cloud Identity. Activate single sign-on by using the Security Assertion Markup Language (SAML).
  • C. Set up the Google Cloud Identity Platform. Configure an external authentication provider by using OpenID Connect and link user accounts based on attributes.
  • D. Activate external identities on the Identity-Aware Proxy. Use the Security Assertion Markup Language (SAML) to configure authentication based on attributes to the central authentication provider.
Suggested Answer: A 🗳️

Comments

Pime13
4 months ago
Selected Answer: A
https://cloud.google.com/iam/docs/workforce-identity-federation Workforce Identity Federation lets you use an external identity provider (IdP) to authenticate and authorize a workforce—a group of users, such as employees, partners, and contractors—using IAM, so that the users can access Google Cloud services. With Workforce Identity Federation you don't need to synchronize user identities from your existing IdP to Google Cloud identities, as you would with Cloud Identity's Google Cloud Directory Sync (GCDS). Workforce Identity Federation extends Google Cloud's identity capabilities to support syncless, attribute-based single sign on.
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: A
A is good.
upvoted 1 times
...
3fd692e
5 months ago
Selected Answer: A
Clearly A.
upvoted 1 times
...
yokoyan
7 months, 1 week ago
Selected Answer: A
I think it's A.
upvoted 4 times
json4u
5 months, 4 weeks ago
I was wrong. Correct answer is C.
upvoted 1 times
json4u
5 months, 3 weeks ago
I wish I could delete my reply. It's A obviously. https://cloud.google.com/iam/docs/workforce-identity-federation
upvoted 2 times
...
...
...
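A sketch of answer A using an OIDC workforce pool provider; the issuer URI, client ID, and assertion attribute names come from the third-party IdP and are hypothetical here:

```shell
gcloud iam workforce-pools create corp-pool \
  --organization=123456789 --location=global

# CEL attribute mapping: no identity sync; permissions follow IdP attributes.
gcloud iam workforce-pools providers create-oidc corp-idp \
  --workforce-pool=corp-pool --location=global \
  --issuer-uri="https://idp.example.com" \
  --client-id="gcp-console" \
  --web-sso-response-type=code \
  --web-sso-assertion-claims-behavior=merge-user-info-over-id-token-claims \
  --attribute-mapping="google.subject=assertion.sub,google.groups=assertion.groups"
```

Users then sign in to the console through the workforce identity federation console URL, and IAM bindings can target principal sets derived from the mapped attributes.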

Question 292

You are implementing a new web application on Google Cloud that will be accessed from your on-premises network. To provide protection from threats like malware, you must implement transport layer security (TLS) interception for incoming traffic to your application. What should you do?

  • A. Configure Secure Web Proxy. Offload the TLS traffic in the load balancer, inspect the traffic, and forward the traffic to the web application.
  • B. Configure an internal proxy load balancer. Offload the TLS traffic in the load balancer, inspect the traffic, and forward the traffic to the web application.
  • C. Configure a hierarchical firewall policy. Enable TLS interception by using Cloud Next Generation Firewall (NGFW) Enterprise.
  • D. Configure a VPC firewall rule. Enable TLS interception by using Cloud Next Generation Firewall (NGFW) Enterprise.
Suggested Answer: A 🗳️

Comments

YourFriendlyNeighborhoodSpider
3 weeks, 2 days ago
Selected Answer: C
Google Cloud's Cloud Next Generation Firewall (NGFW) Enterprise includes TLS inspection capabilities, which allow you to decrypt and inspect encrypted traffic for threats before it reaches your web application. This is essential for protecting against malware and other threats embedded in encrypted traffic. A hierarchical firewall policy allows you to enforce firewall rules at the organization or folder level, ensuring consistent security policies across multiple projects.
Why not the other options? A (Secure Web Proxy + load balancer): Google Cloud does not offer a native Secure Web Proxy with TLS interception for incoming traffic, and load balancers in Google Cloud do not provide deep TLS interception for security inspection.
upvoted 1 times
...
Popa
1 month, 2 weeks ago
Selected Answer: A
Here’s why: Secure Web Proxy is specifically designed to provide advanced security measures, including TLS interception. It allows you to offload the TLS traffic from the load balancer, inspect it for threats, and then forward it to your web application. This method ensures that incoming traffic is thoroughly inspected for malware and other threats before reaching your application, providing a secure environment.
upvoted 2 times
YourFriendlyNeighborhoodSpider
3 weeks, 2 days ago
This is not true. Google Cloud does not offer a native Secure Web Proxy with TLS interception for incoming traffic. Load balancers in Google Cloud do not provide deep TLS interception for security inspection.
upvoted 1 times
...
...
JohnDohertyDoe
3 months, 1 week ago
Selected Answer: C
C is the right answer, you cannot enable TLS inspection for a simple firewall rule. You would need to add it to a Hierarchical Policy or a Global Firewall policy.
upvoted 1 times
...
Zek
4 months ago
Selected Answer: C
https://cloud.google.com/firewall/docs/about-firewalls Cloud NGFW implements network and hierarchical firewall policies that can be attached to a resource hierarchy node. These policies provide a consistent firewall experience across the Google Cloud resource hierarchy.
upvoted 1 times
...
Pime13
4 months ago
Selected Answer: A
https://cloud.google.com/secure-web-proxy/docs/tls-inspection-overview Secure Web Proxy provides a TLS inspection service that allows you to intercept, inspect, and enforce security policies on TLS traffic. This approach ensures that incoming traffic is thoroughly inspected for threats before reaching your application.
upvoted 1 times
...
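For the NGFW Enterprise route discussed above, the setup revolves around two pieces: a TLS inspection policy backed by a Certificate Authority Service CA pool, and a firewall policy rule that opts matching traffic into inspection. A heavily hedged sketch of the request bodies; field names approximate the networksecurity/compute REST shapes and all resource names are placeholders, so verify against the current documentation:

```python
# Approximate payloads for a Cloud NGFW Enterprise TLS-inspection setup.
# Shapes are an assumption based on public API docs, not verified output.

def tls_inspection_policy(project: str, region: str, ca_pool: str) -> dict:
    """TLS inspection policy referencing a CA pool that re-signs traffic."""
    return {
        "name": (f"projects/{project}/locations/{region}"
                 f"/tlsInspectionPolicies/inspect-inbound"),
        "caPool": ca_pool,
    }

def inspect_rule(priority: int) -> dict:
    """Firewall policy rule that flags matched HTTPS traffic for inspection."""
    return {
        "priority": priority,
        "direction": "INGRESS",
        "action": "apply_security_profile_group",
        "tlsInspect": True,
        "match": {"layer4Configs": [{"ipProtocol": "tcp", "ports": ["443"]}]},
    }

policy = tls_inspection_policy(
    "my-proj", "us-central1",
    "projects/my-proj/locations/us-central1/caPools/ngfw-pool")
rule = inspect_rule(1000)
```

The rule lives in a network or hierarchical firewall policy, which is why several commenters argue a plain VPC firewall rule (option D) cannot carry TLS inspection on its own.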
BPzen
4 months, 2 weeks ago
Selected Answer: C
Why C is Correct: Hierarchical Firewall Policy: A hierarchical firewall policy allows you to enforce consistent firewall rules across an organization, folders, or projects. Configuring TLS interception within this policy ensures that all relevant traffic passing through the policy can be decrypted, inspected, and then forwarded. A. Configure Secure Web Proxy. Offload the TLS traffic in the load balancer, inspect the traffic, and forward the traffic to the web application. Secure Web Proxy is not designed to handle incoming traffic for web applications in Google Cloud; it is typically used for outbound traffic filtering. This approach would not address the requirement to protect incoming traffic with TLS interception.
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: C
https://cloud.google.com/firewall/docs/about-tls-inspection
upvoted 2 times
...
KLei
4 months, 3 weeks ago
Selected Answer: A
Secure Web Proxy: This setup allows you to intercept and inspect TLS traffic securely. By configuring a Secure Web Proxy, you can manage incoming traffic more effectively and implement security measures against threats. TLS Offloading at the Load Balancer: By offloading TLS traffic at the load balancer, you can decrypt and inspect the traffic before forwarding it to your web application.
upvoted 1 times
KLei
4 months, 3 weeks ago
Sorry, seems D is better as secure web proxy is for outgoing traffic while next gen firewall is for both incoming and outgoing traffic.
upvoted 1 times
...
...
junb
5 months, 3 weeks ago
C is Correct
upvoted 1 times
...
BB_norway
6 months, 3 weeks ago
Selected Answer: D
With the Enterprise tier we can intercept TLS traffic
upvoted 3 times
json4u
5 months, 4 weeks ago
Of course it's D. Secure Web Proxy primarily handles outbound (egress) web traffic. Next Generation Firewall (NGFW) Enterprise supports TLS interception as well, and it's a better fit for this scenario involving traffic protection for a web application accessed from an on-premises network.
upvoted 2 times
...
...
ABotha
7 months ago
B is correct. Secure Web Proxy is typically used for external traffic, not internal traffic from an on-premises network.
upvoted 2 times
Pach1211
6 months, 3 weeks ago
An internal proxy load balancer is designed for load balancing within the Google Cloud environment and is not suitable for intercepting and inspecting TLS traffic from external sources, such as traffic coming from an on-premises network to a web application hosted on Google Cloud.
upvoted 1 times
...
...
yokoyan
7 months, 1 week ago
Selected Answer: A
I think it's A.
upvoted 2 times
...

Question 293

Exam Professional Cloud Security Engineer topic 1 question 293 discussion

Question #: 293
Topic #: 1

Your organization has hired a small, temporary partner team for 18 months. The temporary team will work alongside your DevOps team to develop your organization's application that is hosted on Google Cloud. You must give the temporary partner team access to your application's resources on Google Cloud and ensure that partner employees lose access if they are removed from their employer's organization. What should you do?

  • A. Create a temporary username and password for the temporary partner team members. Auto-clean the usernames and passwords after the work engagement has ended.
  • B. Create a workforce identity pool and federate the identity pool with the identity provider (IdP) of the temporary partner team.
  • C. Implement just-in-time privileged access to Google Cloud for the temporary partner team.
  • D. Add the identities of the temporary partner team members to your identity provider (IdP).
Suggested Answer: B 🗳️

Comments

Pime13
4 months ago
Selected Answer: B
b: https://cloud.google.com/iam/docs/workforce-identity-federation https://cloud.google.com/iam/docs/temporary-elevated-access One way to protect sensitive resources is to limit access to them. However, limiting access to sensitive resources also creates friction for anyone who occasionally needs to access those resources. For example, a user might need break-glass, or emergency, access to sensitive resources to resolve an incident. In these situations, we recommend giving the user permission to access the resource temporarily. We also recommend that, to improve auditing, you record the user's justification for accessing the resource.
upvoted 1 times
...
MoAk
4 months, 3 weeks ago
Selected Answer: B
Answer is B
upvoted 1 times
...
yokoyan
7 months, 1 week ago
I think it's B.
upvoted 4 times
...

Question 294

Exam Professional Cloud Security Engineer topic 1 question 294 discussion

Question #: 294
Topic #: 1

Your organization has an internet-facing application behind a load balancer. Your regulators require end-to-end encryption of user login credentials. You must implement this requirement. What should you do?

  • A. Generate a symmetric key with Cloud KMS. Encrypt client-side user credentials by using the symmetric key.
  • B. Concatenate the credential with a timestamp. Submit the timestamp and hashed value of credentials to the network.
  • C. Deploy the TLS certificate at Google Cloud Global HTTPS Load Balancer, and submit the user credentials through HTTPS.
  • D. Generate an asymmetric key with Cloud KMS. Encrypt client-side user credentials using the public key.
Suggested Answer: C 🗳️

Comments

zanhsieh
3 months, 4 weeks ago
Selected Answer: D
I take D. End-to-end encryption should outweigh scalability, since the question states it as a requirement.
upvoted 1 times
...
MoAk
4 months, 3 weeks ago
Selected Answer: C
Initially I was with D; however, it then didn't seem a very scalable option. I believe the answer is C. The load balancer would decrypt the connection to inspect the packets at L7 but would re-encrypt it (SSL bridging) for full end-to-end encryption. https://cloud.google.com/docs/security/encryption-in-transit#transport_layer_security
upvoted 1 times
...
f36bdb5
5 months ago
Selected Answer: D
In case of C, the Load Balancer would strip the TLS connection, making it not end to end.
upvoted 1 times
MoAk
4 months, 2 weeks ago
Negative, LBs can indeed carry out SSL bridging.
upvoted 1 times
...
...
jmaquino
5 months, 1 week ago
Selected Answer: C
C:
upvoted 2 times
...
yokoyan
7 months, 1 week ago
Selected Answer: C
I think it's C.
upvoted 4 times
...
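The HTTPS option the thread converges on amounts to the client negotiating TLS against the load balancer's certificate before any credential leaves the browser. A minimal stdlib-only sketch of what a verifying client context enforces:

```python
# Client-side TLS verification, as implied by "submit credentials over HTTPS".
# A request made with this context refuses connections whose certificate chain
# or hostname does not validate, so credentials are never sent in the clear.
import ssl

ctx = ssl.create_default_context()  # loads the system CA bundle
assert ctx.verify_mode == ssl.CERT_REQUIRED  # reject unverified servers
assert ctx.check_hostname  # hostname must match the certificate
```

Whether this counts as "end to end" is exactly the C-vs-D debate above: the load balancer terminates TLS, so strictness of the requirement decides the answer.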

Question 295

Exam Professional Cloud Security Engineer topic 1 question 295 discussion

Question #: 295
Topic #: 1

Your organization heavily utilizes serverless applications while prioritizing security best practices. You are responsible for enforcing image provenance and compliance with security standards before deployment. You leverage Cloud Build as your continuous integration and continuous deployment (CI/CD) tool for building container images. You must configure Binary Authorization to ensure that only images built by your Cloud Build pipeline are deployed and that the images pass security standard compliance checks. What should you do?

  • A. Create a Binary Authorization attestor that uses a scanner to assess source code management repositories. Deploy images only if the attestor validates results against a security policy.
  • B. Create a Binary Authorization attestor that utilizes a scanner to evaluate container image build processes. Define a policy that requires deployment of images only if this attestation is present.
  • C. Create a Binary Authorization attestor that retrieves the Cloud Build build ID of the container image. Configure a policy to allow deployment only if there's a matching build ID attestation.
  • D. Utilize a custom Security Health Analytics module to create a policy. Enforce the policy through Binary Authorization to prevent deployment of images that do not meet predefined security standards.
Suggested Answer: B 🗳️

Comments

KLei
4 months, 3 weeks ago
Selected Answer: C
Google has a built-in security scanner (Container Analysis), so no separate scanner is needed, which rules out A and B. Add a verification method: use the built-in scanner to evaluate the image against compliance policies.
upvoted 1 times
...
jmaquino
5 months, 1 week ago
C, but I have doubts about this part: "and that the images pass security standard compliance checks." Security Command Center can do that.
upvoted 1 times
...
jmaquino
5 months, 1 week ago
C: Binary Authorization (overview) is a Google Cloud product that enforces deploy-time constraints on applications. Its Google Kubernetes Engine (GKE) integration allows users to enforce that containers deployed to a Kubernetes cluster are cryptographically signed by a trusted authority and verified by a Binary Authorization attestor. You can configure Binary Authorization to require attestations based on the location of the source code to prevent container images built from unauthorized source from being deployed.
upvoted 1 times
...
json4u
5 months, 4 weeks ago
Selected Answer: C
I think it's C.
upvoted 1 times
...
abdelrahman89
6 months, 1 week ago
C - Image Provenance: By using the Cloud Build build ID as the attestation, you can directly link the deployed image to the specific build process that created it. This ensures that only images built by your trusted CI/CD pipeline are deployed. Security Standards Compliance: You can integrate security checks into your Cloud Build pipeline, such as vulnerability scanning or compliance audits. If an image fails these checks, the build process can be aborted, preventing the creation of a non-compliant image. Policy Enforcement: The Binary Authorization policy ensures that only images with the correct build ID attestation are deployed, effectively enforcing the security standards you've defined in your CI/CD pipeline.
upvoted 1 times
jmaquino
5 months, 1 week ago
I think it would be C if this part were not there: "that the images pass security standard compliance checks." Binary Authorization can integrate with Security Command Center to provide a single-pane-of-glass view of policy violations, log violations to audit logging, and integrate with KMS for signing the image. It also integrates with Cloud Build, GKE, and Cloud Run for deployments, and with third-party solutions like CloudBees, Palo Alto Networks, and Terraform.
upvoted 1 times
...
...
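The option-C idea discussed above, admitting only images that carry an attestation from the Cloud Build pipeline, is expressed in the Binary Authorization policy. A sketch of that policy as a Python dict, following the documented policy YAML shape; the project and attestor names are placeholders:

```python
# Binary Authorization policy requiring an attestation from a trusted attestor
# before any image is admitted. Shape mirrors the documented policy YAML.

def binauthz_policy(project: str, attestor: str) -> dict:
    return {
        "defaultAdmissionRule": {
            "evaluationMode": "REQUIRE_ATTESTATION",
            # Block non-compliant deploys and write an audit log entry.
            "enforcementMode": "ENFORCED_BLOCK_AND_AUDIT_LOG",
            "requireAttestationsBy": [
                f"projects/{project}/attestors/{attestor}",
            ],
        },
        # Allow Google-maintained system images so cluster internals still run.
        "admissionWhitelistPatterns": [
            {"namePattern": "gcr.io/google_containers/*"},
        ],
    }

policy = binauthz_policy("prod-proj", "built-by-cloud-build")
```

The attestor is what links a deployed image back to the pipeline: Cloud Build signs an attestation for each image it builds, and only signed images pass this rule.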

Question 296

Exam Professional Cloud Security Engineer topic 1 question 296 discussion

Question #: 296
Topic #: 1

Your organization operates in a highly regulated industry and uses multiple Google Cloud services. You need to identify potential risks to regulatory compliance. Which situation introduces the greatest risk?

  • A. The security team mandates the use of customer-managed encryption keys (CMEK) for all data classified as sensitive.
  • B. Sensitive data is stored in a Cloud Storage bucket with the uniform bucket-level access setting enabled.
  • C. The audit team needs access to Cloud Audit Logs related to managed services like BigQuery.
  • D. Principals have broad IAM roles allowing the creation and management of Compute Engine VMs without a pre-defined hardening process.
Suggested Answer: D 🗳️

Comments

json4u
5 months, 4 weeks ago
Selected Answer: D
It's D of course.
upvoted 2 times
...
abdelrahman89
6 months, 1 week ago
D - Lack of Control: This situation grants individuals broad permissions to create and manage VMs without ensuring that they adhere to necessary security standards. This lack of control can lead to the creation of vulnerable or non-compliant systems. Regulatory Implications: Depending on your industry and specific regulations, having unhardened systems can expose your organization to significant risks, such as data breaches, unauthorized access, or non-compliance with security requirements.
upvoted 3 times
...
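The risk in option D shows up concretely as principals holding broad roles that bundle VM and firewall management permissions. A toy audit over IAM policy bindings (the binding shape matches `getIamPolicy` output; the role list and sample data are illustrative):

```python
# Flag members who hold overly broad roles. Which roles count as "broad" is
# a policy decision; this set is an example, not an official list.
BROAD_ROLES = {"roles/editor", "roles/owner", "roles/compute.admin"}

def broad_grants(bindings):
    """Return (role, member) pairs where a member holds a broad role."""
    return [(b["role"], m) for b in bindings
            for m in b["members"] if b["role"] in BROAD_ROLES]

bindings = [
    {"role": "roles/compute.admin", "members": ["user:dev@example.com"]},
    {"role": "roles/viewer", "members": ["group:audit@example.com"]},
]
flagged = broad_grants(bindings)
# flagged → [("roles/compute.admin", "user:dev@example.com")]
```

Pairing a check like this with a mandated hardening process is what closes the compliance gap the question describes.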

Question 297

Exam Professional Cloud Security Engineer topic 1 question 297 discussion

Question #: 297
Topic #: 1

Your multinational organization is undergoing rapid expansion within Google Cloud. New teams and projects are added frequently. You are concerned about the potential for inconsistent security policy application and permission sprawl across the organization. You must enforce consistent standards while maintaining the autonomy of regional teams. You need to design a strategy to effectively manage IAM and organization policies at scale, ensuring security and administrative efficiency. What should you do?

  • A. Create detailed organization-wide policies for common scenarios. Instruct teams to apply the policies carefully at the project and resource level as needed.
  • B. Delegate the creation of organization policies to regional teams. Centrally review these policies for compliance before deployment.
  • C. Define a small set of essential organization policies. Supplement these policies with a library of optional policy templates for teams to leverage as needed.
  • D. Use a hierarchical structure of folders. Implement template-based organization policies that cascade down, allowing limited customization by regional teams.
Suggested Answer: D 🗳️

Comments

json4u
5 months, 4 weeks ago
Selected Answer: D
I'm sure it's D.
upvoted 1 times
...
abdelrahman89
6 months, 1 week ago
D - Hierarchical Structure: Organizing your Google Cloud environment into a hierarchical structure of folders provides a natural way to group resources and apply policies at different levels. Template-Based Policies: Creating template-based organization policies allows you to define a set of common policies that can be applied across multiple folders and projects. This ensures consistency and reduces the risk of errors. Cascade Down: By cascading policies down the hierarchy, you can ensure that policies are applied at the appropriate level, while still allowing regional teams to customize them within defined limits. This balances the need for consistency with the desire for autonomy.
upvoted 3 times
...
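Option D's "template policies that cascade down" means setting an organization policy at a folder so every project beneath inherits it unless a child overrides where allowed. A sketch of such a policy payload, assuming the Org Policy v2 API shape; the constraint name is a real one, the folder ID is a placeholder:

```python
# Folder-level organization policy that cascades to all child projects.
# Shape follows the Org Policy v2 API; treat it as an assumption to verify.

def folder_policy(folder_id: str, constraint: str, deny_all: bool) -> dict:
    return {
        "name": f"folders/{folder_id}/policies/{constraint}",
        "spec": {
            "inheritFromParent": True,  # compose with ancestor policies
            "rules": [{"denyAll": deny_all}],
        },
    }

# e.g. block external IPs on VMs for everything under the regional folder
policy = folder_policy("999001", "compute.vmExternalIpAccess", True)
```

Regional autonomy comes from which constraints the central team leaves overridable at lower levels versus which it locks at the folder.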

Question 298

Exam Professional Cloud Security Engineer topic 1 question 298 discussion

Question #: 298
Topic #: 1

A security audit uncovered several inconsistencies in your project's Identity and Access Management (IAM) configuration. Some service accounts have overly permissive roles, and a few external collaborators have more access than necessary. You need to gain detailed visibility into changes to IAM policies, user activity, service account behavior, and access to sensitive projects. What should you do?

  • A. Configure Google Cloud Functions to be triggered by changes to IAM policies. Analyze changes by using the policy simulator, send alerts upon risky modifications, and store event details.
  • B. Enable the metrics explorer in Cloud Monitoring to follow the service account authentication events and build alerts linked on it.
  • C. Use Cloud Audit Logs. Create log export sinks to send these logs to a security information and event management (SIEM) solution for correlation with other event sources.
  • D. Deploy the OS Config Management agent to your VMs. Use OS Config Management to create patch management jobs and monitor system modifications.
Suggested Answer: C 🗳️

Comments

Pime13
4 months ago
Selected Answer: C
This approach allows you to monitor and analyze IAM changes comprehensively, ensuring that you can detect and respond to any security issues effectively https://cloud.google.com/iam/docs/audit-logging
upvoted 1 times
...
json4u
5 months, 4 weeks ago
Selected Answer: C
It's C
upvoted 1 times
...
abdelrahman89
6 months, 1 week ago
C - Comprehensive Logging: Cloud Audit Logs capture a wide range of activities, including IAM policy changes, user logins, API calls, and resource access. This provides a comprehensive view of your organization's IAM activity. Log Export: By creating log export sinks, you can send Cloud Audit Logs to a SIEM solution, where they can be correlated with other event sources to identify potential security threats. Detailed Analysis: SIEM solutions can provide advanced analytics and reporting capabilities, allowing you to analyze IAM changes, detect anomalies, and identify potential security risks.
upvoted 3 times
...
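The sink option C describes can be sketched as a single aggregated sink whose filter keeps only Cloud Audit Logs and whose destination is a Pub/Sub topic the SIEM consumes. The resource names below are placeholders; the field names follow the Cloud Logging sink resource:

```python
# Aggregated log sink exporting Admin Activity and Data Access audit logs
# to a Pub/Sub topic for SIEM ingestion. Names are illustrative.

def audit_sink(topic: str) -> dict:
    log_filter = (
        'logName:"cloudaudit.googleapis.com%2Factivity" OR '
        'logName:"cloudaudit.googleapis.com%2Fdata_access"'
    )
    return {
        "name": "siem-audit-sink",
        "destination": f"pubsub.googleapis.com/{topic}",
        "filter": log_filter,
        "includeChildren": True,  # aggregate every project under the parent
    }

sink = audit_sink("projects/siem-proj/topics/audit-logs")
```

With `includeChildren` set, one org- or folder-level sink covers IAM changes, user activity, and service account behavior across all child projects, which is the unified visibility the question asks for.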

Question 299

Exam Professional Cloud Security Engineer topic 1 question 299 discussion

Question #: 299
Topic #: 1

You manage multiple internal-only applications that are hosted within different Google Cloud projects. You are deploying a new application that requires external internet access. To maintain security, you want to clearly separate this new application from internal systems. Your solution must have effective security isolation for the new externally-facing application. What should you do?

  • A. Deploy the application within the same project as an internal application. Use a Shared VPC model to manage network configurations.
  • B. Place the application in the same project as an existing internal application, and adjust firewall rules to allow external traffic.
  • C. Create a VPC Service Controls perimeter, and place the new application’s project within that perimeter.
  • D. Create a new project for the application, and use VPC Network Peering to access necessary resources in the internal projects.
Suggested Answer: D 🗳️

Comments

YourFriendlyNeighborhoodSpider
3 weeks, 2 days ago
Selected Answer: D
Answer is D, because that way you have complete isolation as required in the question. Explanation for C: VPC Service Controls (VPC-SC) is useful for protecting data from being exfiltrated, but it does not isolate an externally-facing app from internal systems. It is more suited for controlling access to sensitive APIs and services, not for network-level isolation.
upvoted 1 times
...
p981pa123
2 months, 2 weeks ago
Selected Answer: C
You need stronger security and isolation for the externally-facing application and want to prevent unintended data access or leakage.
upvoted 1 times
...
LaxmanTiwari
3 months, 2 weeks ago
Selected Answer: D
Agree with Pime13
upvoted 2 times
...
Pime13
4 months ago
Selected Answer: D
Option C suggests creating a VPC Service Controls perimeter and placing the new application’s project within that perimeter. While VPC Service Controls can enhance security by defining a security perimeter around Google Cloud resources, it is primarily designed to protect data from being exfiltrated to unauthorized networks or users. It does not inherently provide the level of isolation needed for an externally-facing application. Creating a new project (Option D) ensures complete separation of resources, IAM policies, and network configurations, which is crucial for maintaining security isolation between internal and external applications. This approach minimizes the risk of accidental exposure of internal resources to the internet.
upvoted 2 times
...
vamgcp
4 months, 2 weeks ago
Selected Answer: D
While VPC Service Controls offer strong isolation, they might be overkill for this scenario involving internal applications with moderate security needs.
upvoted 2 times
...
f36bdb5
5 months ago
Selected Answer: C
It does not say anywhere that the external application should access internal resources. VPC peering would then be a massive security risk
upvoted 4 times
MoAk
4 months, 1 week ago
Indeed. AND the Q clearly states 'effective security isolation'. This is VPC SCs
upvoted 2 times
...
...
json4u
5 months, 4 weeks ago
Selected Answer: D
It's D
upvoted 1 times
...
abdelrahman89
6 months, 1 week ago
D - Dedicated Project: Creating a new project for the externally-facing application provides a clear separation from internal systems, reducing the risk of unauthorized access or lateral movement. VPC Network Peering: Using VPC Network Peering allows the new project to access resources in the internal projects, while maintaining a controlled and secure boundary. This ensures that external traffic cannot directly access internal resources without going through the established peering connection. Improved Security: This approach offers enhanced security by minimizing the attack surface and limiting the potential impact of a breach.
upvoted 1 times
...
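Option D's wiring, debated above, puts the externally-facing app in its own project and VPC and reaches internal resources only over an explicit peering. A sketch of the peering request body, approximating the Compute Engine `addPeering` shape; project and network names are placeholders:

```python
# VPC peering from the new external-app VPC to an internal VPC.
# Body approximates the compute networks addPeering request.

def peering_request(peer_project: str, peer_network: str) -> dict:
    return {
        "networkPeering": {
            "name": "external-app-to-internal",
            "network": (f"projects/{peer_project}/global/networks/"
                        f"{peer_network}"),
            "exchangeSubnetRoutes": True,
            # Keep the blast radius small: don't export custom routes.
            "exportCustomRoutes": False,
        }
    }

req = peering_request("internal-proj", "internal-vpc")
```

Peering must be configured from both sides before traffic flows, and firewall rules on each VPC still apply, which is what keeps the boundary controlled.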

Question 300

Exam Professional Cloud Security Engineer topic 1 question 300 discussion

Question #: 300
Topic #: 1

You work for an ecommerce company that stores sensitive customer data across multiple Google Cloud regions. The development team has built a new 3-tier application to process orders and must integrate the application into the production environment.

You must design the network architecture to ensure strong security boundaries and isolation for the new application, facilitate secure remote maintenance by authorized third-party vendors, and follow the principle of least privilege. What should you do?

  • A. Create separate VPC networks for each tier. Use VPC peering between application tiers and other required VPCs. Provide vendors with SSH keys and root access only to the instances within the VPC for maintenance purposes.
  • B. Create a single VPC network and create different subnets for each tier. Create a new Google project specifically for the third-party vendors and grant the network admin role to the vendors. Deploy a VPN appliance and rely on the vendors’ configurations to secure third-party access.
  • C. Create separate VPC networks for each tier. Use VPC peering between application tiers and other required VPCs. Enable Identity-Aware Proxy (IAP) for remote access to management resources, limiting access to authorized vendors.
  • D. Create a single VPC network and create different subnets for each tier. Create a new Google project specifically for the third-party vendors. Grant the vendors ownership of that project and the ability to modify the Shared VPC configuration.
Suggested Answer: C 🗳️

Comments

Pime13
4 months ago
Selected Answer: C
This approach ensures that each tier of the application is isolated within its own VPC, enhancing security. VPC peering allows necessary communication between tiers while maintaining isolation. Using Identity-Aware Proxy (IAP) for remote access ensures that only authorized vendors can access management resources, adhering to the principle of least privilege.
upvoted 1 times
...
json4u
5 months, 4 weeks ago
Selected Answer: C
It's C.
upvoted 1 times
...
abdelrahman89
6 months, 1 week ago
C - Separate VPCs: Creating separate VPC networks for each tier provides a strong isolation boundary, reducing the risk of unauthorized access or lateral movement. VPC Peering: Using VPC peering between application tiers and other required VPCs allows for secure communication while maintaining isolation. Identity-Aware Proxy (IAP): Enabling IAP for remote access to management resources provides a secure and controlled way for authorized vendors to access the application. IAP requires authentication and authorization, ensuring that only authorized individuals can access the resources. Least Privilege: This approach adheres to the principle of least privilege by granting vendors only the necessary access to perform their maintenance tasks.
upvoted 2 times
...
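The IAP piece of option C can be sketched as an IAM binding: vendors get IAP tunnel access instead of public SSH, and the grant is time-bound so it lapses on its own. The role name is the real IAP tunnel role; the group and expiry are placeholders:

```python
# IAM binding granting a vendor group IAP tunnel access with an expiry
# condition, following the standard IAM policy binding shape.

def vendor_iap_binding(vendor_group: str, expiry_iso: str) -> dict:
    return {
        "role": "roles/iap.tunnelResourceAccessor",
        "members": [f"group:{vendor_group}"],
        # Time-bound the grant so access lapses automatically.
        "condition": {
            "title": "vendor-maintenance-window",
            "expression": f'request.time < timestamp("{expiry_iso}")',
        },
    }

binding = vendor_iap_binding("vendors@partner.example", "2026-01-01T00:00:00Z")
```

Because IAP authenticates each session against Google identities, there are no long-lived SSH keys to rotate when a vendor leaves, which is the least-privilege advantage over option A.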

Question 301

Exam Professional Cloud Security Engineer topic 1 question 301 discussion

Question #: 301
Topic #: 1

Your organization is implementing separation of duties in a Google Cloud project. A group of developers must deploy new code, but cannot have permission to change network firewall rules. What should you do?

  • A. Assign the network administrator IAM role to all developers. Tell developers not to change firewall settings.
  • B. Use Access Context Manager to create conditions that allow only authorized administrators to change firewall rules based on attributes such as IP address or device security posture.
  • C. Create and assign two custom IAM roles. Assign the deployer role to control Compute Engine and deployment-related permissions. Assign the network administrator role to manage firewall permissions.
  • D. Grant the editor IAM role to the developer group. Explicitly negate any firewall modification permissions by using IAM deny policies.
Suggested Answer: C 🗳️

Comments

json4u
5 months, 4 weeks ago
Selected Answer: C
It's C.
upvoted 1 times
...
abdelrahman89
6 months, 1 week ago
C - Custom Roles: Creating custom IAM roles allows you to define granular permissions, ensuring that developers only have the necessary access to deploy new code. Separation of Duties: By assigning the deployer role to control Compute Engine and deployment-related permissions, while assigning the network administrator role to manage firewall permissions, you effectively enforce separation of duties. This reduces the risk of unauthorized access or malicious activities. Granular Control: Custom roles provide more granular control over permissions compared to pre-defined roles, allowing you to tailor access to specific tasks.
upvoted 2 times
...
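The separation in option C can be checked mechanically: the deployer role's permission set must not intersect the firewall management permissions. The permission strings below are real Compute Engine permissions; the role split itself is an illustrative sketch:

```python
# Two custom-role permission sets enforcing separation of duties.
# Deployers manage instances; only network admins touch firewalls.
DEPLOYER_PERMS = {
    "compute.instances.create",
    "compute.instances.delete",
    "compute.instances.start",
    "compute.instanceGroups.update",
}
NETWORK_ADMIN_PERMS = {
    "compute.firewalls.create",
    "compute.firewalls.update",
    "compute.firewalls.delete",
}

# Duties are separated only if no firewall permission leaks into the
# deployer role.
assert DEPLOYER_PERMS.isdisjoint(NETWORK_ADMIN_PERMS)
```

This is also why option A fails: granting the network admin role and asking developers not to use it leaves the permission overlap in place.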

Question 302

Exam Professional Cloud Security Engineer topic 1 question 302 discussion

Question #: 302
Topic #: 1

You manage a Google Cloud organization with many projects located in various regions around the world. The projects are protected by the same Access Context Manager access policy. You created a new folder that will host two projects that process protected health information (PHI) for US-based customers. The two projects will be separately managed and require stricter protections. You are setting up the VPC Service Controls configuration for the new folder. You must ensure that only US-based personnel can access these projects and restrict Google Cloud API access to only BigQuery and Cloud Storage within these projects. What should you do?

  • A. • Create a scoped access policy, add the new folder under “Select resources to include in the policy,” and assign an administrator under “Manage principals.”
    • For the service perimeter, specify the two new projects as “Resources to protect” in the service perimeter configuration.
    • Set “Restricted services” to “all services,” set “VPC accessible services” to “Selected services,” and specify only BigQuery and Cloud Storage under “Selected services.”
  • B. • Enable Identity Aware Proxy in the new projects.
    • Create an Access Context Manager access level with an “IP Subnetworks” attribute condition set to the US-based corporate IP range.
    • Enable the “Restrict Resource Service Usage” organization policy at the new folder level with an “Allow” policy type and set both “storage.googleapis.com” and “bigquery.googleapis.com” under “Custom values.”
  • C. • Edit the organization-level access policy and add the new folder under “Select resources to include in the policy.”
    • Specify the two new projects as “Resources to protect” in the service perimeter configuration.
    • Set “Restricted services” to “all services,” set “VPC accessible services” to “Selected services,” and specify only BigQuery and Cloud Storage.
    • Edit the existing access level to add a “Geographic locations” condition set to “US.”
  • D. • Configure a Cloud Interconnect connection or a Virtual Private Network (VPN) between the on-premises environment and the Google Cloud organization.
    • Configure the VPC firewall policies within the new projects to only allow connections from the on-premises IP address range.
    • Enable the Restrict Resource Service Usage organization policy on the new folder with an “Allow” policy type, and set both “storage.googleapis.com” and “bigquery.googleapis.com” under “Custom values.”
Suggested Answer: C 🗳️

Comments
JohnDohertyDoe
3 months, 1 week ago
Selected Answer: A
Editing the existing policy would affect all the projects (question clearly states there are projects all around the world). While A does not cover the US restriction, it seems to be the best answer.
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: C
Only one that restricts access to US personnel
upvoted 1 times
nah99
4 months, 1 week ago
Could it be D? Question doesn't mention on-prem, but if limiting to the US on-prem IP range, then this gets it done
upvoted 2 times
...
...
vamgcp
4 months, 2 weeks ago
Selected Answer: C
Edits the Organization-Level Access Policy: This ensures that the stricter access controls, including the geographic location restriction, are applied to the new folder and its projects while maintaining the existing policy for other projects in the organization. Service Perimeter: Defining the service perimeter specifically for the two new projects creates a security boundary around the PHI data, preventing data exfiltration. Restricting Services: Limiting access to only BigQuery and Cloud Storage minimizes the potential attack surface and reduces the risk of unauthorized data access to other services. Geographic Location Condition: By adding the "Geographic locations" condition to the existing access level, you ensure that only users accessing the resources from within the US are granted access, meeting the requirement for US-based personnel access.
upvoted 1 times
...
kalbd2212
4 months, 2 weeks ago
going with A
upvoted 1 times
...
kalbd2212
4 months, 2 weeks ago
I don't think C is the right answer: "Edit the existing access level to add a “Geographic locations” condition set to “US”" means editing the existing access policy, which will impact the existing projects using it.
upvoted 2 times
nah99
4 months, 1 week ago
Yep, and they mention there being projects located around the world
upvoted 1 times
...
...
siheom
6 months ago
Selected Answer: C
The best solution to meet the requirements of restricting access to US-based personnel and limiting Google Cloud API access to only BigQuery and Cloud Storage for the two new projects processing PHI is C.
upvoted 3 times
...
abdelrahman89
6 months, 1 week ago
C - Centralized Access Control: Editing the organization-level access policy ensures consistency and reduces the management overhead compared to creating a separate scoped policy. VPC Service Controls for Isolation: Defining the new projects as "Resources to protect" isolates them within the service perimeter. Restricting services to "all services" and then allowing only BigQuery and Cloud Storage provides granular control over API access. Geographic Location Restriction: Adding a "Geographic locations" condition set to "US" in the existing access level ensures that only users accessing from US locations can utilize the access policy and access these resources.
upvoted 1 times
...
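The scoped-policy route in answer A can be sketched with gcloud. All IDs below (organization, folder, policy, project numbers) are placeholders, and the console's "Restricted services: all services" setting maps to enumerating services on the CLI, so treat this as an outline rather than a definitive configuration:

```shell
# Create a scoped access policy whose scope is the new PHI folder.
gcloud access-context-manager policies create \
  --organization=organizations/123456789012 \
  --scopes=folders/FOLDER_ID \
  --title="phi-scoped-policy"

# Create a perimeter protecting the two projects, with VPC accessible
# services limited to BigQuery and Cloud Storage.
gcloud access-context-manager perimeters create phi_perimeter \
  --policy=POLICY_ID \
  --title="PHI perimeter" \
  --resources=projects/111111111111,projects/222222222222 \
  --restricted-services=storage.googleapis.com,bigquery.googleapis.com \
  --enable-vpc-accessible-services \
  --vpc-allowed-services=storage.googleapis.com,bigquery.googleapis.com
```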

Question 303

There is a threat actor that is targeting organizations like yours. Attacks are always initiated from a known IP address range. You want to deny-list those IPs for your website, which is exposed to the internet through an Application Load Balancer. What should you do?

  • A. Create a Cloud Armor policy with a deny-rule for the known IP address range. Attach the policy to the backend of the Application Load Balancer.
  • B. Activate Identity-Aware Proxy for the backend of the Application Load Balancer. Create a firewall rule that only allows traffic from the proxy to the application.
  • C. Create a log sink with a filter containing the known IP address range. Trigger an alert that detects when the Application Load Balancer is accessed from those IPs.
  • D. Create a Cloud Firewall policy with a deny-rule for the known IP address range. Associate the firewall policy to the Virtual Private Cloud with the application backend.
Suggested Answer: A 🗳️

Comments
json4u
5 months, 4 weeks ago
Selected Answer: A
It's A.
upvoted 1 times
...
abdelrahman89
6 months, 1 week ago
A - Cloud Armor: Cloud Armor is a web application firewall (WAF) that provides DDoS protection and advanced security features. Creating a deny-rule for the known IP address range will effectively block traffic from those IPs, preventing them from reaching your website. Application Load Balancer Integration: Attaching the Cloud Armor policy to the backend of the Application Load Balancer ensures that the policy is applied to all traffic entering your website, regardless of the specific backend instance.
upvoted 2 times
...
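Answer A can be sketched in three gcloud steps; the policy name, IP range, and backend service name are placeholders:

```shell
# Create a Cloud Armor security policy.
gcloud compute security-policies create deny-threat-actor \
  --description="Deny known attacker IP range"

# Add a deny rule for the known source range (403 returned to matches).
gcloud compute security-policies rules create 1000 \
  --security-policy=deny-threat-actor \
  --src-ip-ranges="198.51.100.0/24" \
  --action=deny-403

# Attach the policy to the load balancer's backend service.
gcloud compute backend-services update my-backend \
  --security-policy=deny-threat-actor \
  --global
```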

Question 304

You are managing a Google Cloud environment that is organized into folders that represent different teams. These teams need the flexibility to modify organization policies relevant to their work. You want to grant the teams the necessary permissions while upholding Google-recommended security practices and minimizing administrative complexity. What should you do?

  • A. Create a custom IAM role with the organization policy administrator permission and grant the permission to each team’s folder. Limit policy modifications based on folder names within the custom role’s definition.
  • B. Assign the organization policy administrator role to a central service account and provide teams with the credentials to use the service account when needed.
  • C. Create an organization-level tag. Attach the tag to relevant folders. Use an IAM condition to restrict the organization policy administrator role to resources with that tag.
  • D. Grant each team the organization policy administrator role at the organization level.
Suggested Answer: C 🗳️

Comments
p981pa123
2 months, 2 weeks ago
Selected Answer: A
Tags in Google Cloud are primarily designed for organizing and categorizing resources. While it's possible to create IAM conditions that reference tags (e.g., limiting the use of a role to resources with specific tags), this method is not the most intuitive or straightforward way to manage IAM policies, especially when the main goal is to provide flexible policy management for different teams. In your case, folder-based isolation with custom IAM roles is a cleaner and more intuitive way to achieve team-level control over organization policies.
upvoted 1 times
...
json4u
5 months, 4 weeks ago
Selected Answer: C
It's C.
upvoted 1 times
...
abdelrahman89
6 months, 1 week ago
C - Granular Control: Creating an organization-level tag allows you to precisely control which teams have access to modify organization policies by attaching the tag to relevant folders. This ensures that only authorized teams can make changes. IAM Condition: Using an IAM condition to restrict the organization policy administrator role to resources with the tag provides a flexible and efficient way to grant permissions while maintaining control. This ensures that the role is only accessible for the intended teams. Security Best Practices: This approach aligns with Google-recommended security practices by limiting access to organization policies to authorized teams and using IAM conditions to enforce appropriate controls. Administrative Efficiency: This approach simplifies administration by providing a centralized mechanism for managing permissions and ensuring that only authorized teams can modify organization policies.
upvoted 2 times
...
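The tag-plus-condition pattern in answer C could look roughly like the following. Every name here (org ID, tag key/value, group) is a hypothetical placeholder, and the condition is supplied from a file because commas inside the CEL expression are awkward to pass inline:

```shell
# Create an org-level tag key and value, and attach the value to a team folder.
gcloud resource-manager tags keys create team-policy \
  --parent=organizations/123456789012
gcloud resource-manager tags values create team-a \
  --parent=123456789012/team-policy
gcloud resource-manager tags bindings create \
  --tag-value=123456789012/team-policy/team-a \
  --parent=//cloudresourcemanager.googleapis.com/folders/FOLDER_ID

# Grant the Organization Policy Administrator role, conditioned on the tag.
cat > condition.yaml <<'EOF'
title: team-a-folders
expression: resource.matchTag("123456789012/team-policy", "team-a")
EOF
gcloud organizations add-iam-policy-binding 123456789012 \
  --member="group:team-a@example.com" \
  --role="roles/orgpolicy.policyAdmin" \
  --condition-from-file=condition.yaml
```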

Question 305

Your organization is using Vertex AI Workbench Instances. You must ensure that newly deployed Instances are automatically kept up-to-date and that users cannot accidentally alter settings in the operating system. What should you do?

  • A. Enforce the disableRootAccess and requireAutoUpgradeSchedule organization policies for newly deployed Instances.
  • B. Enable the VM Manager and ensure the corresponding Google Compute Engine instances are added.
  • C. Implement a firewall rule that prevents Secure Shell access to the corresponding Google Compute Engine instances by using tags.
  • D. Assign the AI Notebooks Runner and AI Notebooks Viewer roles to the users of the AI Workbench Instances.
Suggested Answer: A 🗳️

Comments
Pime13
4 months ago
Selected Answer: A
https://cloud.google.com/vertex-ai/docs/workbench/instances/manage-metadata
upvoted 1 times
...
BPzen
4 months, 2 weeks ago
Selected Answer: B
Why B is Correct: VM Manager: VM Manager automates the management of Compute Engine instances, including patch management and configuration updates. By enabling VM Manager, you ensure that operating systems of Vertex AI Workbench instances are automatically kept up-to-date with the latest security patches and updates. Automatic Enrollment: When VM Manager is enabled, you can enroll the corresponding GCE instances and enforce compliance with organizational policies. Control Over System Configurations: VM Manager allows you to enforce configuration settings, preventing users from making unauthorized changes to the OS.
upvoted 1 times
...
json4u
5 months, 4 weeks ago
Selected Answer: A
It's A. Well explained below.
upvoted 2 times
...
abdelrahman89
6 months, 1 week ago
A - disableRootAccess: This organization policy prevents users from accessing the root account of the underlying Google Compute Engine instance, which helps to prevent accidental configuration changes. requireAutoUpgradeSchedule: This organization policy ensures that instances are automatically upgraded to the latest operating system patches, keeping them secure and up-to-date.
upvoted 3 times
...
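Enforcing one of the answer-A constraints might be sketched as below (the org ID is a placeholder; the same pattern applies to ainotebooks.requireAutoUpgradeSchedule):

```shell
# Enforce the Vertex AI Workbench "disable root access" constraint
# at the organization level via a policy file.
cat > disable-root.yaml <<'EOF'
name: organizations/123456789012/policies/ainotebooks.disableRootAccess
spec:
  rules:
  - enforce: true
EOF
gcloud org-policies set-policy disable-root.yaml
```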

Question 306

You must ensure that the keys used for at-rest encryption of your data are compliant with your organization's security controls. One security control mandates that keys get rotated every 90 days. You must implement an effective detection strategy to validate if keys are rotated as required. What should you do?

  • A. Analyze the crypto key versions of the keys by using data from Cloud Asset Inventory. If an active key is older than 90 days, send an alert message through your incident notification channel.
  • B. Assess the keys in the Cloud Key Management Service by implementing code in Cloud Run. If a key is not rotated after 90 days, raise a finding in Security Command Center.
  • C. Define a metric that checks for timely key updates by using Cloud Logging. If a key is not rotated after 90 days, send an alert message through your incident notification channel.
  • D. Identify keys that have not been rotated by using Security Health Analytics. If a key is not rotated after 90 days, a finding in Security Command Center is raised.
Suggested Answer: D 🗳️

Comments
Pime13
4 months ago
Selected Answer: D
https://cloud.google.com/security-command-center/docs/how-to-remediate-security-health-analytics-findings#kms_key_not_rotated
upvoted 1 times
...
BPzen
4 months, 2 weeks ago
Selected Answer: A
Why A is Correct: Cloud Asset Inventory: Cloud Asset Inventory offers a detailed view of cryptographic keys, including the age of each key version. By periodically analyzing this data, you can determine if a key version has been in use for more than 90 days. Proactive Monitoring: This approach allows you to set up automated checks and send alerts to incident notification channels (e.g., email, Slack, PagerDuty) when keys exceed the allowed age.
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: D
D - https://cloud.google.com/security-command-center/docs/how-to-remediate-security-health-analytics-findings#kms_key_not_rotated
upvoted 2 times
...
jmaquino
5 months, 1 week ago
Selected Answer: A
https://cloud.google.com/secret-manager/docs/analyze-resources?hl=es-419
upvoted 1 times
...
koo_kai
6 months ago
Selected Answer: D
It's D https://cloud.google.com/security-command-center/docs/how-to-remediate-security-health-analytics-findings#kms_key_not_rotated
upvoted 4 times
...
siheom
6 months ago
Selected Answer: A
VOTE A
upvoted 1 times
...
abdelrahman89
6 months, 1 week ago
D - Security Health Analytics: Security Health Analytics is a specialized tool designed to assess the security posture of your Google Cloud environment. It can effectively identify keys that have not been rotated within the specified timeframe. Finding in Security Command Center: Raising a finding in Security Command Center ensures that the non-compliance issue is clearly documented and can be addressed promptly. Efficiency: Security Health Analytics provides a streamlined and efficient way to monitor key rotation compliance without requiring custom code or manual analysis.
upvoted 4 times
...
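For answer D, Security Health Analytics raises KMS_KEY_NOT_ROTATED findings automatically; detection then reduces to monitoring Security Command Center. A rough sketch (org ID is a placeholder):

```shell
# List active "key not rotated" findings raised by Security Health Analytics.
gcloud scc findings list organizations/123456789012 \
  --filter='category="KMS_KEY_NOT_ROTATED" AND state="ACTIVE"'
```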

Question 307

Your organization is developing a sophisticated machine learning (ML) model to predict customer behavior for targeted marketing campaigns. The BigQuery dataset used for training includes sensitive personal information. You must design the security controls around the AI/ML pipeline. Data privacy must be maintained throughout the model’s lifecycle and you must ensure that personal data is not used in the training process. Additionally, you must restrict access to the dataset to an authorized subset of people only. What should you do?

  • A. De-identify sensitive data before model training by using Cloud Data Loss Prevention (DLP) APIs, and implement strict Identity and Access Management (IAM) policies to control access to BigQuery.
  • B. Implement Identity-Aware Proxy to enforce context-aware access to BigQuery and models based on user identity and device.
  • C. Implement at-rest encryption by using customer-managed encryption keys (CMEK) for the pipeline. Implement strict Identity and Access Management (IAM) policies to control access to BigQuery.
  • D. Deploy the model on Confidential VMs for enhanced protection of data and code while in use. Implement strict Identity and Access Management (IAM) policies to control access to BigQuery.
Suggested Answer: A 🗳️

Comments
532b5da
4 months, 2 weeks ago
Selected Answer: A
Answer is A. We want data privacy throughout the lifecycle: C only covers data at rest, D only covers data in use, and B says nothing about data privacy.
upvoted 1 times
...
json4u
5 months, 4 weeks ago
Selected Answer: A
It's A Well explained below.
upvoted 1 times
...
abdelrahman89
6 months, 1 week ago
A - Data De-identification: De-identifying sensitive data using Cloud DLP APIs ensures that the data used for model training does not contain personally identifiable information (PII). This protects data privacy and reduces the risk of unauthorized access or misuse. IAM Policies: Implementing strict IAM policies controls access to BigQuery, ensuring that only authorized personnel can access and use the dataset. This further protects data privacy and reduces the risk of unauthorized access. Comprehensive Approach: This approach combines data de-identification and IAM controls to provide a robust and effective security solution for the AI/ML pipeline.
upvoted 1 times
...
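The de-identification step in answer A can be illustrated with the DLP content:deidentify REST method. The project ID, sample text, and chosen transformation are illustrative only; a real pipeline would de-identify the BigQuery dataset itself rather than inline text:

```shell
# Replace detected email addresses with their infoType name before training.
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/my-project/content:deidentify" \
  -d '{
    "item": {"value": "Contact jane.doe@example.com"},
    "inspectConfig": {"infoTypes": [{"name": "EMAIL_ADDRESS"}]},
    "deidentifyConfig": {
      "infoTypeTransformations": {
        "transformations": [{
          "primitiveTransformation": {"replaceWithInfoTypeConfig": {}}
        }]
      }
    }
  }'
```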

Question 308

Your organization wants to publish yearly reports of your website usage analytics. You must ensure that no data with personally identifiable information (PII) is published by using the Cloud Data Loss Prevention (Cloud DLP) API. Data integrity must be preserved. What should you do?

  • A. Detect all PII in storage by using the Cloud DLP API. Create a cloud function to delete the PII.
  • B. Discover and quarantine your PII data in your storage by using the Cloud DLP API.
  • C. Discover and transform PII data in your reports by using the Cloud DLP API.
  • D. Encrypt the PII from the report by using the Cloud DLP API.
Suggested Answer: C 🗳️

Comments
json4u
5 months, 4 weeks ago
Selected Answer: C
It's C. Well explained below.
upvoted 2 times
...
abdelrahman89
6 months, 1 week ago
C - Data Discovery: Cloud DLP API can effectively discover PII within your reports, identifying sensitive information that needs to be protected. Data Transformation: Once PII is detected, Cloud DLP can transform it into a format that removes personally identifiable elements, such as anonymization or generalization. This ensures that the data remains usable for analytics purposes while protecting privacy. Data Integrity: By transforming PII rather than deleting it, you preserve the overall structure and context of the data, maintaining its integrity for analysis.
upvoted 4 times
...

Question 309

Your development team is launching a new application. The new application has a microservices architecture on Compute Engine instances and serverless components, including Cloud Functions. This application will process financial transactions that require temporary, highly sensitive data in memory. You need to secure data in use during computations with a focus on minimizing the risk of unauthorized access to memory for this financial application. What should you do?

  • A. Enable Confidential VM instances for Compute Engine, and ensure that relevant Cloud Functions can leverage hardware-based memory isolation.
  • B. Use data masking and tokenization techniques on sensitive financial data fields throughout the application and the application's data processing workflows.
  • C. Use the Cloud Data Loss Prevention (Cloud DLP) API to scan and mask sensitive data before feeding the data into any compute environment.
  • D. Store all sensitive data during processing in Cloud Storage by using customer-managed encryption keys (CMEK), and set strict bucket-level permissions.
Suggested Answer: A 🗳️

Comments
json4u
5 months, 4 weeks ago
Selected Answer: A
It's A. Well explained in abdelrahman89's comment.
upvoted 1 times
...
abdelrahman89
6 months, 1 week ago
A - Confidential VMs: Using Confidential VMs provides a strong security boundary around the memory of the VM instances, protecting sensitive data from unauthorized access, even if the VM is compromised. Hardware-Based Memory Isolation: Leveraging hardware-based memory isolation ensures that the data within the VM's memory is protected by hardware-enforced mechanisms, making it significantly more difficult for attackers to access. Comprehensive Protection: This approach provides a comprehensive solution for securing data in use, as it combines both software-based (Confidential VMs) and hardware-based (memory isolation) protections.
upvoted 2 times
nah99
4 months, 1 week ago
I would think A, but how do Cloud Functions leverage hardware-based memory isolation? Is this you or ChatGPT speaking?
upvoted 1 times
...
...
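The Compute Engine side of answer A might be sketched as below. The instance name, zone, and image are placeholders, and Confidential VM requires a supported machine family (N2D is used here for AMD SEV) and a Confidential VM-capable image:

```shell
# Create a Confidential VM so data in memory is encrypted while in use.
gcloud compute instances create fin-app-vm \
  --zone=us-central1-a \
  --machine-type=n2d-standard-4 \
  --confidential-compute \
  --maintenance-policy=TERMINATE \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud
```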

Question 310

You work for a financial organization in a highly regulated industry that is subject to active regulatory compliance. To meet compliance requirements, you need to continuously maintain a specific set of configurations, data residency, organizational policies, and personnel data access controls. What should you do?

  • A. Apply an organizational policy constraint at the organization level to limit the location of new resource creation.
  • B. Create an Assured Workloads folder for your required compliance program to apply defined controls and requirements.
  • C. Go to the Compliance page in Security Command Center. View the report for your status against the required compliance standard. Triage violations to maintain compliance on a regular basis.
  • D. Create a posture.yaml file with the required security compliance posture. Apply the posture with the gcloud scc postures create POSTURE_NAME --posture-from-file=posture.yaml command in Security Command Center Premium.
Suggested Answer: B 🗳️

Comments
Pime13
4 months ago
Selected Answer: B
https://cloud.google.com/assured-workloads/docs/overview#when_to_use_assured_workloads
upvoted 1 times
...
Pime13
4 months ago
Selected Answer: B
https://cloud.google.com/assured-workloads/docs/key-concepts
upvoted 1 times
...
BondleB
5 months, 2 weeks ago
https://cloud.google.com/assured-workloads/docs/key-concepts#:~:text=Assured%20Workloads%20provides%20Google%20Cloud,information%20about%20its%20key%20components.
upvoted 1 times
...
abdelrahman89
5 months, 2 weeks ago
Selected Answer: B
Answer B
upvoted 2 times
...
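Creating an Assured Workloads folder (answer B) can be sketched as follows; the organization ID, billing account, display name, and compliance regime are all placeholders chosen for illustration:

```shell
# Create an Assured Workloads environment (a folder with the compliance
# regime's controls applied) for the required compliance program.
gcloud assured workloads create \
  --organization=123456789012 \
  --location=us-central1 \
  --display-name="regulated-workloads" \
  --compliance-regime=FEDRAMP_MODERATE \
  --billing-account=billingAccounts/000000-AAAAAA-BBBBBB
```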

Question 311

Your organization is worried about recent news headlines regarding application vulnerabilities in production applications that have led to security breaches. You want to automatically scan your deployment pipeline for vulnerabilities and ensure only scanned and verified containers can run in the environment. What should you do?

  • A. Use Kubernetes role-based access control (RBAC) as the source of truth for cluster access by granting “container.clusters.get” to limited users. Restrict deployment access by allowing these users to generate a kubeconfig file containing the configuration access to the GKE cluster.
  • B. Use gcloud artifacts docker images describe LOCATION-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE_ID@sha256:HASH --show-package-vulnerability in your CI/CD pipeline, and trigger a pipeline failure for critical vulnerabilities.
  • C. Enforce the use of Cloud Code for development so users receive real-time security feedback on vulnerable libraries and dependencies before they check in their code.
  • D. Enable Binary Authorization and create attestations of scans.
Suggested Answer: D 🗳️

Comments
BondleB
5 months, 2 weeks ago
https://cloud.google.com/binary-authorization/docs/attestations D
upvoted 2 times
...
jmaquino
5 months, 2 weeks ago
D: https://cloud.google.com/binary-authorization/docs/making-attestations?hl=es-419
upvoted 2 times
...
abdelrahman89
5 months, 2 weeks ago
Selected Answer: D
Answer D
upvoted 2 times
...
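The attestation flow in answer D might look roughly like this. All names are hypothetical, and it assumes a Container Analysis note and a Cloud KMS signing key already exist; the CI/CD pipeline would run the sign-and-create step only after a clean vulnerability scan:

```shell
# Create an attestor backed by an existing Container Analysis note.
gcloud container binauthz attestors create vuln-scan-attestor \
  --attestation-authority-note=vuln-scan-note \
  --attestation-authority-note-project=my-project

# After a successful scan, sign and attach an attestation to the image digest.
gcloud container binauthz attestations sign-and-create \
  --artifact-url="us-docker.pkg.dev/my-project/repo/app@sha256:abc123" \
  --attestor=vuln-scan-attestor \
  --attestor-project=my-project \
  --keyversion=projects/my-project/locations/global/keyRings/kr/cryptoKeys/signer/cryptoKeyVersions/1
```

With a Binary Authorization policy requiring this attestor, GKE admits only images that carry the attestation.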

Question 312

A team at your organization collects logs in an on-premises security information and event management system (SIEM). You must provide a subset of Google Cloud logs for the SIEM, and minimize the risk of data exposure in your cloud environment. What should you do?

  • A. Create a new BigQuery dataset. Stream all logs to this dataset. Provide the on-premises SIEM system access to the data in BigQuery by using workload identity federation and let the SIEM team filter for the relevant log data.
  • B. Define a log view for the relevant logs. Provide access to the log view to a principal from your on-premises identity provider by using workforce identity federation.
  • C. Create a log sink for the relevant logs. Send the logs to Pub/Sub. Retrieve the logs from Pub/Sub and push the logs to the SIEM by using Dataflow.
  • D. Filter for the relevant logs. Store the logs in a Cloud Storage bucket. Grant the service account access to the bucket. Provide the service account key to the SIEM team.
Suggested Answer: C 🗳️

Comments
Popa
1 month, 2 weeks ago
Selected Answer: B
Option C does involve setting up multiple components (Pub/Sub, Dataflow, log sinks) and ensuring they are properly configured, which adds to the complexity of the setup. That being said, option B is still a strong choice because it provides a more straightforward approach to controlling and accessing the logs using log views and identity federation.
upvoted 1 times
...
KLei
3 months, 3 weeks ago
Selected Answer: C
B: Defining a log view provides access control but does not facilitate exporting logs to an external SIEM effectively.
upvoted 1 times
...
BPzen
4 months, 2 weeks ago
Selected Answer: C
Why C is Correct: Log Sink for Filtering: A log sink allows you to filter and export only the relevant logs, ensuring unnecessary data is not sent, which reduces the risk of data exposure. Pub/Sub for Delivery: Exporting logs to Pub/Sub enables real-time streaming of filtered logs to external systems. This ensures the SIEM receives logs promptly and securely. Dataflow for Transformation and Transfer: Use Dataflow to process and transform logs as needed before pushing them to the on-premises SIEM.
upvoted 2 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: C
Answer C.
upvoted 1 times
...
kalbd2212
4 months, 3 weeks ago
going with C..
upvoted 2 times
...
irene062
4 months, 4 weeks ago
Selected Answer: B
Log views let you grant a user access to only a subset of the logs stored in a log bucket. https://cloud.google.com/logging/docs/logs-views
upvoted 1 times
...
abdelrahman89
5 months, 2 weeks ago
Selected Answer: B
Answer B
upvoted 1 times
...
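The export path in answer C can be sketched with gcloud. Project, topic, and sink names below are illustrative, and the log filter is a placeholder for whatever "the relevant logs" means in your environment:

```shell
# Topic in a dedicated project that the SIEM pipeline consumes from
gcloud pubsub topics create siem-logs --project=siem-project

# Filtered log sink in the workload project, pointing at the topic
gcloud logging sinks create siem-sink \
  pubsub.googleapis.com/projects/siem-project/topics/siem-logs \
  --log-filter='severity>=WARNING' \
  --project=app-project

# The sink create command prints a writer identity; grant it publish rights
gcloud pubsub topics add-iam-policy-binding siem-logs \
  --project=siem-project \
  --role=roles/pubsub.publisher \
  --member='serviceAccount:WRITER_IDENTITY_FROM_SINK_OUTPUT'

# Subscription for the Dataflow job that pushes logs to the SIEM
gcloud pubsub subscriptions create siem-sub \
  --topic=siem-logs --project=siem-project
```

Google also publishes Dataflow templates (for example, Pub/Sub to Splunk) that implement the push step without custom pipeline code.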

Question 313


Your Google Cloud organization is subdivided into three folders: production, development, and networking. Networking resources for the organization are centrally managed in the networking folder. You discovered that projects in the production folder are attaching to Shared VPCs outside of the networking folder, which could become a data exfiltration risk. You must resolve the production folder issue without impacting the development folder. You need to use the most efficient and least disruptive approach. What should you do?

  • A. Enable the Restrict Shared VPC Host Projects organization policy on the production folder. Create a custom rule and configure the policy type to Allow. In the Custom value section, enter under:folders/networking.
  • B. Enable the Restrict Shared VPC Host Projects organization policy on the networking folder only. Create a new custom rule and configure the policy type to Allow. In the Custom value section, enter under:organizations/123456739123.
  • C. Enable the Restrict Shared VPC Host Projects organization policy at the project level for each of the production projects. Create a custom rule and configure the policy type to Allow. In the Custom value section, enter under:folders/networking.
  • D. Enable the Restrict Shared VPC Host Projects organization policy at the organization level. Create a custom rule and configure the policy type to Allow. In the Custom value section, enter under:folders/networking.
Suggested Answer: A 🗳️

Comments

MoAk
4 months, 2 weeks ago
Selected Answer: A
Rest don't make sense tbh.
upvoted 1 times
...
abdelrahman89
5 months, 2 weeks ago
Selected Answer: A
Answer A
upvoted 2 times
...
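Answer A corresponds to a folder-level list constraint with an `under:` prefix. A sketch using the legacy `gcloud resource-manager org-policies` surface (folder IDs are placeholders; verify flag names against your gcloud version):

```shell
# Allow only host projects under the networking folder, enforced on the
# production folder so the development folder is untouched
gcloud resource-manager org-policies allow \
  compute.restrictSharedVpcHostProjects \
  "under:folders/NETWORKING_FOLDER_ID" \
  --folder=PRODUCTION_FOLDER_ID
```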

Question 314


Your organization operates in a highly regulated environment and has a stringent set of compliance requirements for protecting customer data. You must encrypt data while in use to meet regulations. What should you do?

  • A. Enable the use of customer-supplied encryption keys (CSEK) keys in the Google Compute Engine VMs to give your organization maximum control over their VM disk encryption.
  • B. Establish a trusted execution environment with a Confidential VM.
  • C. Use a Shielded VM to ensure a secure boot with integrity monitoring for the application environment.
  • D. Use customer-managed encryption keys (CMEK) and Cloud KMS to enable your organization to control their keys for data encryption in Cloud SQL.
Suggested Answer: B 🗳️

Comments

Pime13
4 months ago
Selected Answer: B
https://cloud.google.com/confidential-computing/confidential-vm/docs/confidential-vm-overview
upvoted 1 times
...
jmaquino
5 months, 2 weeks ago
B: https://cloud.google.com/security/products/confidential-computing?hl=es-419
upvoted 2 times
...
abdelrahman89
5 months, 2 weeks ago
Selected Answer: B
Answer B
upvoted 3 times
...
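Encryption in use (answer B) maps to Confidential VMs, which require an SEV-capable machine type such as N2D and host maintenance set to TERMINATE. A minimal sketch; the instance name, zone, and image are illustrative:

```shell
gcloud compute instances create confidential-demo \
  --zone=us-central1-a \
  --machine-type=n2d-standard-2 \
  --confidential-compute \
  --maintenance-policy=TERMINATE \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud
```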

Question 315


Your organization is building a real-time recommendation engine using ML models that process live user activity data stored in BigQuery and Cloud Storage. Each new model developed is saved to Artifact Registry. This new system deploys models to Google Kubernetes Engine and uses Pub/Sub for message queues. Recent industry news has reported attacks exploiting ML model supply chains. You need to enhance the security in this serverless architecture, specifically against risks to the development and deployment pipeline. What should you do?

  • A. Enable container image vulnerability scanning during development and pre-deployment. Enforce Binary Authorization on images deployed from Artifact Registry to your continuous integration and continuous deployment (CI/CD) pipeline.
  • B. Thoroughly sanitize all training data prior to model development to reduce risk of poisoning attacks. Use IAM for authorization, and apply role-based restrictions to code repositories and cloud services.
  • C. Limit external libraries and dependencies that are used for the ML models as much as possible. Continuously rotate encryption keys that are used to access the user data from BigQuery and Cloud Storage.
  • D. Develop strict firewall rules to limit external traffic to Cloud Run instances. Integrate intrusion detection systems (IDS) for real-time anomaly detection on Pub/Sub message flows.
Suggested Answer: A 🗳️

Comments

JohnDohertyDoe
3 months, 1 week ago
Selected Answer: A
A should be the answer. Supply chain risks happen by exploiting vulnerabilities in the images. So scanning the image and blocking deployment secures against supply chain risks. This also matches with the requirement related to the deployment pipeline.
upvoted 1 times
...
zanhsieh
4 months ago
Selected Answer: D
The question asked about "...attacks exploiting ML model supply chains" and "...risks to the development and deployment pipeline", so we should look at anything related to these. A: No. Image scanning and enforcing Binary Authorization only secure the end artifact. B and C: No. Nothing related to securing the development and deployment pipeline. D: Yes, although this option is very shallow on how to implement it, e.g. IDS on Pub/Sub -> FortiSIEM, restricting network ingress for Cloud Run. https://cloud.google.com/run/docs/securing/ingress#yaml
upvoted 1 times
...
abdelrahman89
5 months, 2 weeks ago
Selected Answer: A
Answer A
upvoted 2 times
...
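Answer A combines two controls: scanning images at rest in Artifact Registry and enforcing Binary Authorization at deploy time. A hedged sketch for a GKE target (cluster name and zone are illustrative):

```shell
# Vulnerability scanning for images pushed to Artifact Registry
gcloud services enable containerscanning.googleapis.com

# Binary Authorization enforcement on the deployment target
gcloud services enable binaryauthorization.googleapis.com
gcloud container clusters update ml-cluster \
  --zone=us-central1-a \
  --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE
```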

Question 316


You want to set up a secure, internal network within Google Cloud for database servers. The servers must not have any direct communication with the public internet. What should you do?

  • A. Assign a private IP address to each database server. Use a NAT gateway to provide internet connectivity to the database servers.
  • B. Assign a static public IP address to each database server. Use firewall rules to restrict external access.
  • C. Create a VPC with a private subnet. Assign a private IP address to each database server.
  • D. Assign both a private IP address and a public IP address to each database server.
Suggested Answer: A 🗳️

Comments

YourFriendlyNeighborhoodSpider
3 weeks, 2 days ago
Selected Answer: C
If the question wanted you to allow INDIRECT access (like NAT), it should have been clearer about that. Instead, it's leaving room for pointless debate. In real-world best practices, databases should be in a private subnet with zero internet exposure unless absolutely required (e.g., for updates via a controlled egress path). So yeah, the question is badly worded, and people arguing for NAT are just nitpicking "direct" instead of focusing on security principles!!! SO ANSWER "C" MY DEARS!
upvoted 1 times
...
dlenehan
3 months, 1 week ago
Selected Answer: A
Allows indirect access to internet. Other options are more focused on direct access.
upvoted 2 times
...
Zek
4 months ago
Selected Answer: A
I think A because it says "The servers must not have any direct communication with the public internet." Not direct, which suggests indirect access to the internet may be allowed.
upvoted 3 times
...
dv1
5 months, 2 weeks ago
A seems better to me, as the question says "db servers must not have DIRECT access to the internet".
upvoted 4 times
YourFriendlyNeighborhoodSpider
3 weeks, 2 days ago
if they meant to say that the VMs need "indirect" access to Internet - it would have been mentioned.
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
This is the way.
upvoted 1 times
JohnDohertyDoe
3 months ago
But the question asks to create an internal network, not sure if they need internet access.
upvoted 2 times
...
...
...
abdelrahman89
5 months, 2 weeks ago
Selected Answer: C
Answer C
upvoted 3 times
YourFriendlyNeighborhoodSpider
3 weeks, 2 days ago
ABSOLUTELY RIGHT MY FRIEND
upvoted 1 times
...
...
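For reference, answer C (and the optional Cloud NAT from the answer-A reading debated above) look roughly like this in gcloud; all names and IP ranges are illustrative:

```shell
# Custom-mode VPC and subnet; Private Google Access lets VMs without
# external IPs still reach Google APIs
gcloud compute networks create db-net --subnet-mode=custom
gcloud compute networks subnets create db-subnet \
  --network=db-net --region=us-central1 --range=10.10.0.0/24 \
  --enable-private-ip-google-access

# Database server with a private IP only
gcloud compute instances create db-1 \
  --zone=us-central1-a --network=db-net \
  --subnet=db-subnet --no-address

# Only if controlled outbound access is required (the answer-A reading):
gcloud compute routers create db-router --network=db-net --region=us-central1
gcloud compute routers nats create db-nat \
  --router=db-router --region=us-central1 \
  --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
```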

Question 317


You work for a large organization that recently implemented a 100-Gbps Cloud Interconnect connection between your Google Cloud environment and your on-premises edge router. While routinely checking the connectivity, you noticed that the connection is operational but there is an error message that indicates MACsec is operationally down. You need to resolve this error. What should you do?

  • A. Ensure that the Cloud Interconnect connection supports MACsec.
  • B. Ensure that the on-premises router is not down.
  • C. Ensure that the active pre-shared key created for MACsec is not expired on both the on-premises and Google edge routers.
  • D. Ensure that the active pre-shared key matches on both the on-premises and Google edge routers.
Suggested Answer: D 🗳️

Comments

Pime13
4 months ago
Selected Answer: D
You successfully enabled MACsec on your Cloud Interconnect connection and on your on-premises router, but the MACsec session displays that it is operationally down on your Cloud Interconnect connection links. The issue could be caused by one of the following: the active keys on your on-premises router and Google's edge routers don't match, or a MACsec protocol mismatch exists between your on-premises router and Google's edge router. https://cloud.google.com/network-connectivity/docs/interconnect/how-to/macsec/troubleshoot-macsec
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: D
D rather than C here, since it's a new implementation and it's unlikely that the PSK expired.
upvoted 1 times
...
BondleB
5 months, 1 week ago
https://cloud.google.com/network-connectivity/docs/interconnect/how-to/macsec/troubleshoot-macsec D
upvoted 2 times
...
abdelrahman89
5 months, 2 weeks ago
Selected Answer: D
Answer D
upvoted 2 times
...
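The diagnosis flow behind answer D can be sketched as follows. The `gcloud compute interconnects macsec` command group and its flags are taken from the Cloud Interconnect MACsec how-to guides; verify the exact subcommands against your gcloud version before relying on them:

```shell
# Check the MACsec key set configured on the Google side (CKN/CAK)
gcloud compute interconnects macsec get-config my-interconnect

# If the keys have diverged, rotate in a new pre-shared key, then configure
# the matching CKN/CAK on the on-premises router
gcloud compute interconnects macsec add-key my-interconnect \
  --key-name=key-2 --start-time=2025-07-01T00:00:00Z
```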

Question 318


Your organization must store highly sensitive data within Google Cloud. You need to design a solution that provides the strongest level of security and control. What should you do?

  • A. Use Cloud Storage with customer-supplied encryption keys (CSEK), VPC Service Controls for network isolation, and Cloud DLP for data inspection.
  • B. Use Cloud Storage with customer-managed encryption keys (CMEK), Cloud DLP for data classification, and Secret Manager for storing API access tokens.
  • C. Use Cloud Storage with client-side encryption, Cloud KMS for key management, and Cloud HSM for cryptographic operations.
  • D. Use Cloud Storage with server-side encryption, BigQuery with column-level encryption, and IAM roles for access control.
Suggested Answer: C 🗳️

Comments

YourFriendlyNeighborhoodSpider
3 weeks, 2 days ago
Selected Answer: B
HSM is only for regulatory purposes, and client-side encryption won't provide the highest security. Do not for a second think it's C when you have B as an option. Why option B is correct:
  • Cloud Storage with CMEK: Customer-managed encryption keys allow you to manage your own encryption keys, providing full control over data encryption at rest. Google Cloud can store and process your data, but the encryption keys remain under your control, enhancing security.
  • Cloud DLP (Data Loss Prevention): Cloud DLP helps you inspect, classify, and redact sensitive data, such as personally identifiable information (PII), before it's stored or processed. This is crucial for compliance and risk management.
  • Secret Manager: A service for securely storing API keys, passwords, certificates, and other sensitive data. By using Secret Manager, you ensure that access tokens and secrets are encrypted and controlled with IAM access policies, further increasing the security posture.
upvoted 1 times
...
KLei
3 months, 3 weeks ago
Selected Answer: C
A more suitable option would involve using Cloud HSMs in conjunction with other strong security measures such as CMEKs and Cloud DLP.
upvoted 1 times
...
MoAk
4 months, 2 weeks ago
Selected Answer: C
Highly Secure etc = HSM
upvoted 1 times
...
vamgcp
4 months, 2 weeks ago
Selected Answer: C
  • Client-side encryption: Encrypting data before it leaves your control ensures that even if someone gains access to your Cloud Storage bucket, they cannot decrypt the data without the encryption keys. This provides an extra layer of protection against unauthorized access or data breaches.
  • Cloud KMS: Cloud KMS provides a secure and managed service for generating and storing your encryption keys. You can control key access with granular IAM permissions and audit all key operations.
  • Cloud HSM: Cloud HSM takes key security to the next level by using dedicated, tamper-resistant hardware security modules (HSMs) to generate and protect your keys. This offers the highest level of protection against key compromise.
upvoted 2 times
...
abdelrahman89
5 months, 2 weeks ago
Selected Answer: C
Answer C
upvoted 2 times
...
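For the key-management half of answer C, an HSM-protected key is an ordinary Cloud KMS key created with `--protection-level=hsm`; the client then encrypts locally with (or via a data key wrapped by) this key before uploading. Key ring, key name, and location are illustrative:

```shell
gcloud kms keyrings create sensitive-kr --location=us-central1
gcloud kms keys create wrap-key \
  --keyring=sensitive-kr --location=us-central1 \
  --purpose=encryption --protection-level=hsm
```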

Question 319


The InfoSec team has mandated that all new Cloud Run jobs and services in production must have Binary Authorization enabled. You need to enforce this requirement. What should you do?

  • A. Configure an organization policy to require Binary Authorization enforcement on images deployed to Cloud Run.
  • B. Configure a Security Health Analytics (SHA) custom rule that prevents the execution of Cloud Run jobs and services without Binary Authorization.
  • C. Ensure the Cloud Run admin role is not assigned to developers.
  • D. Configure a Binary Authorization custom policy that is not editable by developers and auto-attaches to all Cloud Run jobs and services.
Suggested Answer: A 🗳️

Comments

Pime13
4 months ago
Selected Answer: A
https://cloud.google.com/binary-authorization/docs/run/requiring-binauthz-cloud-run
upvoted 1 times
...
abdelrahman89
5 months, 2 weeks ago
Selected Answer: A
Answer A
upvoted 2 times
...
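Answer A maps to the `run.allowedBinaryAuthorizationPolicies` list constraint, which makes Cloud Run deployments fail unless they reference an allowed Binary Authorization policy. A sketch using the newer `gcloud org-policies` surface; the folder ID is a placeholder:

```shell
cat > binauthz-required.yaml <<'EOF'
name: folders/PROD_FOLDER_ID/policies/run.allowedBinaryAuthorizationPolicies
spec:
  rules:
    - values:
        allowedValues:
          - default
EOF
gcloud org-policies set-policy binauthz-required.yaml

# New services must then be deployed with an allowed policy, e.g.:
# gcloud run deploy my-svc --image=IMAGE --binary-authorization=default
```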

Question 320


You are developing an application that runs on a Compute Engine VM. The application needs to access data stored in Cloud Storage buckets in other Google Cloud projects. The required access to the buckets is variable. You need to provide access to these resources while following Google-recommended practices. What should you do?

  • A. Limit the VM's access to the Cloud Storage buckets by setting the relevant access scope of the VM.
  • B. Create IAM bindings for the VM’s service account and the required buckets that allow appropriate access to the data stored in the buckets.
  • C. Grant the VM's service account access to the required buckets by using domain-wide delegation.
  • D. Create a group and assign IAM bindings to the group for each bucket that the application needs to access. Assign the VM's service account to the group.
Suggested Answer: B 🗳️

Comments

MoAk
4 months, 2 weeks ago
Selected Answer: B
well explained below
upvoted 1 times
MoAk
4 months, 1 week ago
https://cloud.google.com/iam/docs/best-practices-service-accounts#groups The reason why D is bad in case anyone was conflicted.
upvoted 3 times
...
...
vamgcp
4 months, 2 weeks ago
Selected Answer: B
Directly assigning IAM bindings to the VM's service account for each Cloud Storage bucket provides the most secure and flexible way to manage access to your data. This approach adheres to the principle of least privilege and allows you to adapt to changing access requirements with ease. While groups can be useful for managing permissions for multiple VMs, they add an extra layer of complexity when dealing with a single application on one VM.
upvoted 2 times
...
abdelrahman89
5 months, 2 weeks ago
Selected Answer: B
Answer B
upvoted 1 times
...
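Answer B in gcloud terms: attach a dedicated service account to the VM with the broad `cloud-platform` access scope, and do the real access control with IAM bindings on each bucket. All names are illustrative:

```shell
# Dedicated service account for the application VM
gcloud iam service-accounts create app-vm-sa --project=my-project

gcloud compute instances create app-vm \
  --zone=us-central1-a \
  --service-account=app-vm-sa@my-project.iam.gserviceaccount.com \
  --scopes=cloud-platform

# Per-bucket grants in the other projects, scoped to what the app needs
gcloud storage buckets add-iam-policy-binding gs://other-project-bucket \
  --member='serviceAccount:app-vm-sa@my-project.iam.gserviceaccount.com' \
  --role=roles/storage.objectViewer
```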

Question 321


Your organization strives to be a market leader in software innovation. You provided a large number of Google Cloud environments so developers can test the integration of Gemini in Vertex AI into their existing applications or create new projects. Your organization has 200 developers and a five-person security team. You must prevent and detect violations of security policies across the Google Cloud environments. What should you do? (Choose two.)

  • A. Apply organization policy constraints. Detect and monitor drifts by using Security Health Analytics.
  • B. Publish internal policies and clear guidelines to securely develop applications.
  • C. Use Cloud Logging to create log filters to detect misconfigurations. Trigger Cloud Run functions to remediate misconfigurations.
  • D. Apply a predefined AI-recommended security posture template for Gemini in Vertex AI in Security Command Center Enterprise or Premium tiers.
  • E. Implement the least privileged access Identity and Access Management roles to prevent misconfigurations.
Suggested Answer: AD 🗳️

Comments

YourFriendlyNeighborhoodSpider
3 weeks, 2 days ago
Selected Answer: AD
I agree with nah99; A and D seem reasonable given Vertex AI is mentioned.
upvoted 1 times
...
nah99
4 months, 1 week ago
Selected Answer: AD
Specifically mentions Gemini/Vertex, so definitely D. https://cloud.google.com/security-command-center/docs/security-posture-essentials-secure-ai-template A and E are both good, but the requirement is prevent and detect, which aligns better with A.
upvoted 2 times
MoAk
4 months, 1 week ago
A & D for sure.
upvoted 1 times
...
...
BPzen
4 months, 2 weeks ago
Selected Answer: AE
A. Apply organization policy constraints. Detect and monitor drifts by using Security Health Analytics.
  • Organization policies: Enforcing organization policies (e.g., constraints on resource locations, API access, or service usage) helps standardize security practices across all environments. Developers can create and test environments without bypassing critical security controls.
  • Security Health Analytics (SHA): SHA, available in Security Command Center Premium, detects and alerts on violations of security best practices and misconfigurations, such as overly permissive roles or public resource exposure.
E. Implement the least privileged access Identity and Access Management roles to prevent misconfigurations.
  • Least privileged access: Assigning IAM roles based on the principle of least privilege prevents users from making changes outside their scope of work, reducing misconfiguration risks.
upvoted 1 times
...
abdelrahman89
5 months, 2 weeks ago
Selected Answer: AD
Answer A D
upvoted 1 times
...
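The prevent/detect split in answer A can be sketched as an organization policy guardrail plus a Security Health Analytics findings query in Security Command Center. The constraint chosen, the filter, and all IDs are illustrative; verify command shapes against current gcloud:

```shell
# Prevent: an example org-level guardrail (restrict resource locations)
cat > locations.yaml <<'EOF'
name: organizations/ORG_ID/policies/gcp.resourceLocations
spec:
  rules:
    - values:
        allowedValues:
          - in:us-locations
EOF
gcloud org-policies set-policy locations.yaml

# Detect: list active findings surfaced by Security Health Analytics
gcloud scc findings list organizations/ORG_ID \
  --filter='state="ACTIVE"'
```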